00:00:00.001 Started by upstream project "autotest-per-patch" build number 132782 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.979 The recommended git tool is: git 00:00:00.979 using credential 00000000-0000-0000-0000-000000000002 00:00:00.982 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.992 Fetching changes from the remote Git repository 00:00:00.997 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.010 Using shallow fetch with depth 1 00:00:01.010 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.010 > git --version # timeout=10 00:00:01.020 > git --version # 'git version 2.39.2' 00:00:01.021 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.031 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.031 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.793 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.806 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.821 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.821 > git config core.sparsecheckout # timeout=10 00:00:07.834 > git read-tree -mu HEAD # timeout=10 00:00:07.850 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.878 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.879 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.999 [Pipeline] Start of Pipeline 00:00:08.010 [Pipeline] library 00:00:08.011 Loading library shm_lib@master 00:00:08.011 Library shm_lib@master is cached. Copying from home. 00:00:08.023 [Pipeline] node 00:00:08.031 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.033 [Pipeline] { 00:00:08.041 [Pipeline] catchError 00:00:08.043 [Pipeline] { 00:00:08.054 [Pipeline] wrap 00:00:08.062 [Pipeline] { 00:00:08.070 [Pipeline] stage 00:00:08.072 [Pipeline] { (Prologue) 00:00:08.087 [Pipeline] echo 00:00:08.089 Node: VM-host-WFP1 00:00:08.094 [Pipeline] cleanWs 00:00:08.103 [WS-CLEANUP] Deleting project workspace... 00:00:08.103 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.109 [WS-CLEANUP] done 00:00:08.328 [Pipeline] setCustomBuildProperty 00:00:08.391 [Pipeline] httpRequest 00:00:09.102 [Pipeline] echo 00:00:09.103 Sorcerer 10.211.164.101 is alive 00:00:09.110 [Pipeline] retry 00:00:09.112 [Pipeline] { 00:00:09.127 [Pipeline] httpRequest 00:00:09.132 HttpMethod: GET 00:00:09.132 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.133 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.159 Response Code: HTTP/1.1 200 OK 00:00:09.159 Success: Status code 200 is in the accepted range: 200,404 00:00:09.159 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.446 [Pipeline] } 00:00:28.463 [Pipeline] // retry 00:00:28.470 [Pipeline] sh 00:00:28.756 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.772 [Pipeline] httpRequest 00:00:31.789 [Pipeline] echo 00:00:31.790 Sorcerer 10.211.164.101 is dead 00:00:31.797 [Pipeline] httpRequest 00:00:33.597 [Pipeline] echo 00:00:33.600 Sorcerer 10.211.164.101 is alive 00:00:33.610 [Pipeline] retry 00:00:33.612 [Pipeline] { 00:00:33.626 [Pipeline] httpRequest 00:00:33.632 HttpMethod: GET 00:00:33.633 URL: http://10.211.164.101/packages/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:00:33.634 Sending request to url: http://10.211.164.101/packages/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:00:33.663 Response Code: HTTP/1.1 200 OK 00:00:33.669 Success: Status code 200 is in the accepted range: 200,404 00:00:33.670 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:05:14.252 [Pipeline] } 00:05:14.271 [Pipeline] // retry 00:05:14.278 [Pipeline] sh 00:05:14.563 + tar --no-same-owner -xf spdk_496bfd677005e62b85d6d26bda2d98fe14c1b5fc.tar.gz 00:05:17.108 [Pipeline] sh 00:05:17.390 + git -C spdk log --oneline -n5 00:05:17.390 496bfd677 env: match legacy mem mode config with DPDK 00:05:17.390 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:05:17.390 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:05:17.390 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:05:17.390 0ea9ac02f accel/mlx5: Create pool of UMRs 00:05:17.408 [Pipeline] writeFile 00:05:17.422 [Pipeline] sh 00:05:17.703 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:17.714 [Pipeline] sh 00:05:17.993 + cat autorun-spdk.conf 00:05:17.993 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:17.993 SPDK_TEST_NVMF=1 00:05:17.993 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:17.993 SPDK_TEST_URING=1 00:05:17.993 SPDK_TEST_USDT=1 00:05:17.993 SPDK_RUN_UBSAN=1 00:05:17.993 NET_TYPE=virt 00:05:17.993 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:17.999 RUN_NIGHTLY=0 00:05:18.001 [Pipeline] } 00:05:18.017 [Pipeline] // stage 00:05:18.031 [Pipeline] stage 00:05:18.033 [Pipeline] { (Run VM) 00:05:18.045 [Pipeline] sh 00:05:18.325 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:18.325 + echo 'Start stage prepare_nvme.sh' 00:05:18.325 Start stage prepare_nvme.sh 00:05:18.325 + [[ -n 6 ]] 00:05:18.325 + disk_prefix=ex6 00:05:18.325 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:05:18.325 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:05:18.325 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:05:18.325 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 00:05:18.325 ++ SPDK_TEST_NVMF=1 00:05:18.325 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:18.325 ++ SPDK_TEST_URING=1 00:05:18.325 ++ SPDK_TEST_USDT=1 00:05:18.325 ++ SPDK_RUN_UBSAN=1 00:05:18.325 ++ NET_TYPE=virt 00:05:18.325 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:18.325 ++ RUN_NIGHTLY=0 00:05:18.325 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:18.325 + nvme_files=() 00:05:18.325 + declare -A nvme_files 00:05:18.325 + backend_dir=/var/lib/libvirt/images/backends 00:05:18.325 + nvme_files['nvme.img']=5G 00:05:18.325 + nvme_files['nvme-cmb.img']=5G 00:05:18.325 + nvme_files['nvme-multi0.img']=4G 00:05:18.325 + nvme_files['nvme-multi1.img']=4G 00:05:18.325 + nvme_files['nvme-multi2.img']=4G 00:05:18.325 + nvme_files['nvme-openstack.img']=8G 00:05:18.325 + nvme_files['nvme-zns.img']=5G 00:05:18.325 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:18.325 + (( SPDK_TEST_FTL == 1 )) 00:05:18.325 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:18.325 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:18.325 + for nvme in "${!nvme_files[@]}" 00:05:18.325 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:05:18.325 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:18.325 + for nvme in "${!nvme_files[@]}" 00:05:18.325 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:05:18.325 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:18.325 + for nvme in "${!nvme_files[@]}" 00:05:18.325 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:05:18.584 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:18.584 + for nvme in "${!nvme_files[@]}" 00:05:18.584 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:05:18.584 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:18.584 + for nvme in "${!nvme_files[@]}" 00:05:18.584 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:05:18.584 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:18.584 + for nvme in "${!nvme_files[@]}" 00:05:18.584 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:05:18.841 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:18.841 + for nvme in "${!nvme_files[@]}" 00:05:18.841 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:05:19.098 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:19.098 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:05:19.098 + echo 'End stage prepare_nvme.sh' 00:05:19.098 End stage prepare_nvme.sh 00:05:19.109 [Pipeline] sh 00:05:19.395 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:19.395 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:05:19.395 00:05:19.395 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:05:19.395 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:05:19.395 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:19.395 HELP=0 00:05:19.395 DRY_RUN=0 00:05:19.395 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:05:19.395 NVME_DISKS_TYPE=nvme,nvme, 00:05:19.395 NVME_AUTO_CREATE=0 00:05:19.395 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:05:19.395 NVME_CMB=,, 00:05:19.395 NVME_PMR=,, 00:05:19.395 NVME_ZNS=,, 00:05:19.395 NVME_MS=,, 00:05:19.395 NVME_FDP=,, 00:05:19.395 SPDK_VAGRANT_DISTRO=fedora39 00:05:19.395 SPDK_VAGRANT_VMCPU=10 00:05:19.395 SPDK_VAGRANT_VMRAM=12288 00:05:19.395 SPDK_VAGRANT_PROVIDER=libvirt 00:05:19.395 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:19.395 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:19.395 SPDK_OPENSTACK_NETWORK=0 00:05:19.395 VAGRANT_PACKAGE_BOX=0 00:05:19.395 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:19.395 FORCE_DISTRO=true 00:05:19.395 VAGRANT_BOX_VERSION= 00:05:19.395 EXTRA_VAGRANTFILES= 00:05:19.395 NIC_MODEL=e1000 00:05:19.395 00:05:19.395 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:05:19.395 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:05:21.927 Bringing machine 'default' up with 'libvirt' provider... 00:05:22.867 ==> default: Creating image (snapshot of base box volume). 00:05:23.127 ==> default: Creating domain with the following settings... 
00:05:23.127 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733735759_db596dd3a2afc9addaa2 00:05:23.127 ==> default: -- Domain type: kvm 00:05:23.127 ==> default: -- Cpus: 10 00:05:23.127 ==> default: -- Feature: acpi 00:05:23.127 ==> default: -- Feature: apic 00:05:23.127 ==> default: -- Feature: pae 00:05:23.127 ==> default: -- Memory: 12288M 00:05:23.127 ==> default: -- Memory Backing: hugepages: 00:05:23.127 ==> default: -- Management MAC: 00:05:23.127 ==> default: -- Loader: 00:05:23.127 ==> default: -- Nvram: 00:05:23.127 ==> default: -- Base box: spdk/fedora39 00:05:23.127 ==> default: -- Storage pool: default 00:05:23.127 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733735759_db596dd3a2afc9addaa2.img (20G) 00:05:23.127 ==> default: -- Volume Cache: default 00:05:23.127 ==> default: -- Kernel: 00:05:23.127 ==> default: -- Initrd: 00:05:23.127 ==> default: -- Graphics Type: vnc 00:05:23.127 ==> default: -- Graphics Port: -1 00:05:23.127 ==> default: -- Graphics IP: 127.0.0.1 00:05:23.127 ==> default: -- Graphics Password: Not defined 00:05:23.127 ==> default: -- Video Type: cirrus 00:05:23.127 ==> default: -- Video VRAM: 9216 00:05:23.127 ==> default: -- Sound Type: 00:05:23.127 ==> default: -- Keymap: en-us 00:05:23.127 ==> default: -- TPM Path: 00:05:23.127 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:23.127 ==> default: -- Command line args: 00:05:23.127 ==> default: -> value=-device, 00:05:23.127 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:23.127 ==> default: -> value=-drive, 00:05:23.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:05:23.127 ==> default: -> value=-device, 00:05:23.127 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:23.127 ==> default: -> value=-device, 00:05:23.127 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:23.127 ==> default: -> value=-drive, 00:05:23.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:23.127 ==> default: -> value=-device, 00:05:23.127 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:23.127 ==> default: -> value=-drive, 00:05:23.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:23.127 ==> default: -> value=-device, 00:05:23.127 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:23.127 ==> default: -> value=-drive, 00:05:23.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:23.127 ==> default: -> value=-device, 00:05:23.127 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:23.387 ==> default: Creating shared folders metadata... 00:05:23.387 ==> default: Starting domain. 00:05:25.919 ==> default: Waiting for domain to get an IP address... 00:05:44.008 ==> default: Waiting for SSH to become available... 00:05:44.945 ==> default: Configuring and enabling network interfaces... 
00:05:51.510 default: SSH address: 192.168.121.170:22 00:05:51.510 default: SSH username: vagrant 00:05:51.510 default: SSH auth method: private key 00:05:54.045 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:02.169 ==> default: Mounting SSHFS shared folder... 00:06:04.700 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:04.700 ==> default: Checking Mount.. 00:06:06.081 ==> default: Folder Successfully Mounted! 00:06:06.081 ==> default: Running provisioner: file... 00:06:07.457 default: ~/.gitconfig => .gitconfig 00:06:07.715 00:06:07.715 SUCCESS! 00:06:07.715 00:06:07.715 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:07.715 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:07.715 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:06:07.715 00:06:07.723 [Pipeline] } 00:06:07.734 [Pipeline] // stage 00:06:07.741 [Pipeline] dir 00:06:07.741 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:06:07.742 [Pipeline] { 00:06:07.753 [Pipeline] catchError 00:06:07.755 [Pipeline] { 00:06:07.765 [Pipeline] sh 00:06:08.045 + vagrant ssh-config --host vagrant 00:06:08.045 + sed -ne /^Host/,$p 00:06:08.045 + tee ssh_conf 00:06:11.331 Host vagrant 00:06:11.331 HostName 192.168.121.170 00:06:11.331 User vagrant 00:06:11.331 Port 22 00:06:11.331 UserKnownHostsFile /dev/null 00:06:11.331 StrictHostKeyChecking no 00:06:11.331 PasswordAuthentication no 00:06:11.331 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:11.331 IdentitiesOnly yes 00:06:11.331 LogLevel FATAL 00:06:11.331 ForwardAgent yes 00:06:11.331 ForwardX11 yes 00:06:11.331 00:06:11.343 [Pipeline] withEnv 00:06:11.345 [Pipeline] { 00:06:11.358 [Pipeline] sh 00:06:11.638 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:11.638 source /etc/os-release 00:06:11.638 [[ -e /image.version ]] && img=$(< /image.version) 00:06:11.638 # Minimal, systemd-like check. 00:06:11.638 if [[ -e /.dockerenv ]]; then 00:06:11.638 # Clear garbage from the node's name: 00:06:11.638 # agt-er_autotest_547-896 -> autotest_547-896 00:06:11.638 # $HOSTNAME is the actual container id 00:06:11.638 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:11.638 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:11.638 # We can assume this is a mount from a host where container is running, 00:06:11.638 # so fetch its hostname to easily identify the target swarm worker. 
00:06:11.638 container="$(< /etc/hostname) ($agent)" 00:06:11.638 else 00:06:11.638 # Fallback 00:06:11.638 container=$agent 00:06:11.638 fi 00:06:11.638 fi 00:06:11.638 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:11.638 00:06:11.907 [Pipeline] } 00:06:11.922 [Pipeline] // withEnv 00:06:11.930 [Pipeline] setCustomBuildProperty 00:06:11.945 [Pipeline] stage 00:06:11.947 [Pipeline] { (Tests) 00:06:11.964 [Pipeline] sh 00:06:12.243 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:12.566 [Pipeline] sh 00:06:12.887 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:13.160 [Pipeline] timeout 00:06:13.161 Timeout set to expire in 1 hr 0 min 00:06:13.163 [Pipeline] { 00:06:13.177 [Pipeline] sh 00:06:13.460 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:14.029 HEAD is now at 496bfd677 env: match legacy mem mode config with DPDK 00:06:14.044 [Pipeline] sh 00:06:14.328 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:14.604 [Pipeline] sh 00:06:14.894 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:15.172 [Pipeline] sh 00:06:15.455 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:06:15.714 ++ readlink -f spdk_repo 00:06:15.714 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:15.714 + [[ -n /home/vagrant/spdk_repo ]] 00:06:15.714 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:15.714 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:15.714 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:15.714 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:15.714 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:15.714 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:06:15.714 + cd /home/vagrant/spdk_repo 00:06:15.714 + source /etc/os-release 00:06:15.714 ++ NAME='Fedora Linux' 00:06:15.714 ++ VERSION='39 (Cloud Edition)' 00:06:15.714 ++ ID=fedora 00:06:15.714 ++ VERSION_ID=39 00:06:15.714 ++ VERSION_CODENAME= 00:06:15.714 ++ PLATFORM_ID=platform:f39 00:06:15.714 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:15.714 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:15.714 ++ LOGO=fedora-logo-icon 00:06:15.714 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:15.714 ++ HOME_URL=https://fedoraproject.org/ 00:06:15.714 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:15.714 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:15.714 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:15.714 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:15.714 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:15.714 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:15.714 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:15.714 ++ SUPPORT_END=2024-11-12 00:06:15.714 ++ VARIANT='Cloud Edition' 00:06:15.714 ++ VARIANT_ID=cloud 00:06:15.714 + uname -a 00:06:15.714 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:15.714 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:16.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.311 Hugepages 00:06:16.311 node hugesize free / total 00:06:16.311 node0 1048576kB 0 / 0 00:06:16.311 node0 2048kB 0 / 0 00:06:16.311 00:06:16.311 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:16.311 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:16.311 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:16.311 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:16.311 + rm -f /tmp/spdk-ld-path 00:06:16.311 + source autorun-spdk.conf 00:06:16.311 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:16.311 ++ SPDK_TEST_NVMF=1 00:06:16.311 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:16.311 ++ SPDK_TEST_URING=1 00:06:16.311 ++ SPDK_TEST_USDT=1 00:06:16.311 ++ SPDK_RUN_UBSAN=1 00:06:16.311 ++ NET_TYPE=virt 00:06:16.311 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:16.311 ++ RUN_NIGHTLY=0 00:06:16.311 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:16.311 + [[ -n '' ]] 00:06:16.311 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:16.311 + for M in /var/spdk/build-*-manifest.txt 00:06:16.311 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:16.311 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:16.311 + for M in /var/spdk/build-*-manifest.txt 00:06:16.311 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:16.311 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:16.311 + for M in /var/spdk/build-*-manifest.txt 00:06:16.311 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:16.311 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:16.594 ++ uname 00:06:16.594 + [[ Linux == \L\i\n\u\x ]] 00:06:16.594 + sudo dmesg -T 00:06:16.594 + sudo dmesg --clear 00:06:16.594 + dmesg_pid=5217 00:06:16.594 + [[ Fedora Linux == FreeBSD ]] 00:06:16.594 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:16.594 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:16.594 + sudo 
dmesg -Tw 00:06:16.594 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:16.594 + [[ -x /usr/src/fio-static/fio ]] 00:06:16.594 + export FIO_BIN=/usr/src/fio-static/fio 00:06:16.594 + FIO_BIN=/usr/src/fio-static/fio 00:06:16.594 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:16.594 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:16.594 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:16.594 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:16.594 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:16.594 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:16.594 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:16.594 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:16.594 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:16.594 09:16:54 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:16.594 09:16:54 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:16.594 09:16:54 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:16.594 09:16:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:16.595 09:16:54 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:16.855 09:16:54 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:16.855 09:16:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.855 09:16:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:16.855 09:16:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:16.855 09:16:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.855 09:16:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.855 09:16:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.855 09:16:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.855 09:16:54 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.855 09:16:54 -- paths/export.sh@5 -- $ export PATH 00:06:16.855 09:16:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.855 09:16:54 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:16.855 09:16:54 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:16.855 09:16:54 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733735814.XXXXXX 00:06:16.855 09:16:54 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733735814.TapPqk 00:06:16.855 09:16:54 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:16.855 09:16:54 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:16.855 09:16:54 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:16.855 09:16:54 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:16.855 09:16:54 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:16.855 09:16:54 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:16.855 09:16:54 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:16.855 09:16:54 -- common/autotest_common.sh@10 -- $ set +x 00:06:16.855 09:16:54 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:06:16.855 09:16:54 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:16.855 09:16:54 -- pm/common@17 -- $ local monitor 00:06:16.855 09:16:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:16.855 09:16:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:16.855 09:16:54 -- pm/common@21 -- $ date +%s 00:06:16.855 09:16:54 -- pm/common@25 -- $ sleep 1 00:06:16.855 09:16:54 -- pm/common@21 -- $ date +%s 00:06:16.855 09:16:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733735814 00:06:16.855 09:16:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733735814 00:06:16.855 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733735814_collect-cpu-load.pm.log 00:06:16.855 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733735814_collect-vmstat.pm.log 00:06:17.798 09:16:55 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:17.798 09:16:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:17.798 09:16:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:17.798 09:16:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:17.798 09:16:55 -- spdk/autobuild.sh@16 -- $ date -u 00:06:17.798 Mon Dec 9 09:16:55 AM UTC 2024 00:06:17.798 09:16:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:17.798 v25.01-pre-312-g496bfd677 00:06:17.798 09:16:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:17.798 09:16:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:17.798 09:16:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:17.798 09:16:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:17.798 09:16:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:17.798 09:16:55 -- common/autotest_common.sh@10 -- $ set +x 00:06:17.798 ************************************ 00:06:17.798 START TEST ubsan 00:06:17.798 ************************************ 00:06:17.798 using ubsan 00:06:17.798 09:16:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:17.798 00:06:17.798 real 0m0.000s 00:06:17.798 user 0m0.000s 00:06:17.798 sys 0m0.000s 00:06:17.798 ************************************ 00:06:17.798 END TEST ubsan 00:06:17.798 ************************************ 00:06:17.798 09:16:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:17.798 09:16:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:17.798 09:16:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:17.798 09:16:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:17.798 09:16:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:17.798 09:16:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:17.798 09:16:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:17.798 09:16:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:17.798 09:16:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:17.798 09:16:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:17.798 09:16:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:06:18.056 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:18.056 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:18.625 Using 'verbs' RDMA provider 00:06:34.500 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:49.370 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:49.935 Creating mk/config.mk...done. 00:06:49.935 Creating mk/cc.flags.mk...done. 00:06:49.935 Type 'make' to build. 
00:06:49.935 09:17:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:49.935 09:17:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:49.935 09:17:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:49.935 09:17:27 -- common/autotest_common.sh@10 -- $ set +x 00:06:49.935 ************************************ 00:06:49.935 START TEST make 00:06:49.935 ************************************ 00:06:49.935 09:17:27 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:50.501 make[1]: Nothing to be done for 'all'. 00:07:02.700 The Meson build system 00:07:02.700 Version: 1.5.0 00:07:02.700 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:02.700 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:02.700 Build type: native build 00:07:02.700 Program cat found: YES (/usr/bin/cat) 00:07:02.700 Project name: DPDK 00:07:02.700 Project version: 24.03.0 00:07:02.700 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:02.700 C linker for the host machine: cc ld.bfd 2.40-14 00:07:02.700 Host machine cpu family: x86_64 00:07:02.700 Host machine cpu: x86_64 00:07:02.700 Message: ## Building in Developer Mode ## 00:07:02.700 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:02.700 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:02.700 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:02.700 Program python3 found: YES (/usr/bin/python3) 00:07:02.700 Program cat found: YES (/usr/bin/cat) 00:07:02.700 Compiler for C supports arguments -march=native: YES 00:07:02.700 Checking for size of "void *" : 8 00:07:02.700 Checking for size of "void *" : 8 (cached) 00:07:02.700 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:02.700 Library m found: YES 00:07:02.700 Library numa found: YES 00:07:02.700 Has header "numaif.h" : YES 00:07:02.700 Library fdt found: NO 00:07:02.700 Library execinfo found: NO 00:07:02.700 Has header "execinfo.h" : YES 00:07:02.700 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:02.700 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:02.700 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:02.701 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:02.701 Run-time dependency openssl found: YES 3.1.1 00:07:02.701 Run-time dependency libpcap found: YES 1.10.4 00:07:02.701 Has header "pcap.h" with dependency libpcap: YES 00:07:02.701 Compiler for C supports arguments -Wcast-qual: YES 00:07:02.701 Compiler for C supports arguments -Wdeprecated: YES 00:07:02.701 Compiler for C supports arguments -Wformat: YES 00:07:02.701 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:02.701 Compiler for C supports arguments -Wformat-security: NO 00:07:02.701 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:02.701 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:02.701 Compiler for C supports arguments -Wnested-externs: YES 00:07:02.701 Compiler for C supports arguments -Wold-style-definition: YES 00:07:02.701 Compiler for C supports arguments -Wpointer-arith: YES 00:07:02.701 Compiler for C supports arguments -Wsign-compare: YES 00:07:02.701 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:02.701 Compiler for C supports arguments -Wundef: YES 00:07:02.701 Compiler for C supports arguments -Wwrite-strings: YES 00:07:02.701 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:07:02.701 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:02.701 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:02.701 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:02.701 Program objdump found: YES (/usr/bin/objdump) 00:07:02.701 Compiler for C supports arguments -mavx512f: YES 00:07:02.701 Checking if "AVX512 checking" compiles: YES 00:07:02.701 Fetching value of define "__SSE4_2__" : 1 00:07:02.701 Fetching value of define "__AES__" : 1 00:07:02.701 Fetching value of define "__AVX__" : 1 00:07:02.701 Fetching value of define "__AVX2__" : 1 00:07:02.701 Fetching value of define "__AVX512BW__" : 1 00:07:02.701 Fetching value of define "__AVX512CD__" : 1 00:07:02.701 Fetching value of define "__AVX512DQ__" : 1 00:07:02.701 Fetching value of define "__AVX512F__" : 1 00:07:02.701 Fetching value of define "__AVX512VL__" : 1 00:07:02.701 Fetching value of define "__PCLMUL__" : 1 00:07:02.701 Fetching value of define "__RDRND__" : 1 00:07:02.701 Fetching value of define "__RDSEED__" : 1 00:07:02.701 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:02.701 Fetching value of define "__znver1__" : (undefined) 00:07:02.701 Fetching value of define "__znver2__" : (undefined) 00:07:02.701 Fetching value of define "__znver3__" : (undefined) 00:07:02.701 Fetching value of define "__znver4__" : (undefined) 00:07:02.701 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:02.701 Message: lib/log: Defining dependency "log" 00:07:02.701 Message: lib/kvargs: Defining dependency "kvargs" 00:07:02.701 Message: lib/telemetry: Defining dependency "telemetry" 00:07:02.701 Checking for function "getentropy" : NO 00:07:02.701 Message: lib/eal: Defining dependency "eal" 00:07:02.701 Message: lib/ring: Defining dependency "ring" 00:07:02.701 Message: lib/rcu: Defining dependency "rcu" 00:07:02.701 Message: lib/mempool: Defining dependency "mempool" 00:07:02.701 Message: lib/mbuf: Defining dependency "mbuf" 00:07:02.701 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:02.701 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:02.701 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:02.701 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:02.701 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:02.701 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:02.701 Compiler for C supports arguments -mpclmul: YES 00:07:02.701 Compiler for C supports arguments -maes: YES 00:07:02.701 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:02.701 Compiler for C supports arguments -mavx512bw: YES 00:07:02.701 Compiler for C supports arguments -mavx512dq: YES 00:07:02.701 Compiler for C supports arguments -mavx512vl: YES 00:07:02.701 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:02.701 Compiler for C supports arguments -mavx2: YES 00:07:02.701 Compiler for C supports arguments -mavx: YES 00:07:02.701 Message: lib/net: Defining dependency "net" 00:07:02.701 Message: lib/meter: Defining dependency "meter" 00:07:02.701 Message: lib/ethdev: Defining dependency "ethdev" 00:07:02.701 Message: lib/pci: Defining dependency "pci" 00:07:02.701 Message: lib/cmdline: Defining dependency "cmdline" 00:07:02.701 Message: lib/hash: Defining dependency "hash" 00:07:02.701 Message: lib/timer: Defining dependency "timer" 00:07:02.701 Message: lib/compressdev: Defining dependency "compressdev" 00:07:02.701 Message: 
lib/cryptodev: Defining dependency "cryptodev" 00:07:02.701 Message: lib/dmadev: Defining dependency "dmadev" 00:07:02.701 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:02.701 Message: lib/power: Defining dependency "power" 00:07:02.701 Message: lib/reorder: Defining dependency "reorder" 00:07:02.701 Message: lib/security: Defining dependency "security" 00:07:02.701 Has header "linux/userfaultfd.h" : YES 00:07:02.701 Has header "linux/vduse.h" : YES 00:07:02.701 Message: lib/vhost: Defining dependency "vhost" 00:07:02.701 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:02.701 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:02.701 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:02.701 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:02.701 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:02.701 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:02.701 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:02.701 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:02.701 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:02.701 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:02.701 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:02.701 Configuring doxy-api-html.conf using configuration 00:07:02.701 Configuring doxy-api-man.conf using configuration 00:07:02.701 Program mandb found: YES (/usr/bin/mandb) 00:07:02.701 Program sphinx-build found: NO 00:07:02.701 Configuring rte_build_config.h using configuration 00:07:02.701 Message: 00:07:02.701 ================= 00:07:02.701 Applications Enabled 00:07:02.701 ================= 00:07:02.701 00:07:02.701 apps: 00:07:02.701 00:07:02.701 00:07:02.701 Message: 00:07:02.701 ================= 00:07:02.701 Libraries Enabled 00:07:02.701 ================= 00:07:02.701 00:07:02.701 libs: 00:07:02.701 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:02.701 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:02.701 cryptodev, dmadev, power, reorder, security, vhost, 00:07:02.701 00:07:02.701 Message: 00:07:02.701 =============== 00:07:02.701 Drivers Enabled 00:07:02.701 =============== 00:07:02.701 00:07:02.701 common: 00:07:02.701 00:07:02.701 bus: 00:07:02.701 pci, vdev, 00:07:02.701 mempool: 00:07:02.701 ring, 00:07:02.701 dma: 00:07:02.701 00:07:02.701 net: 00:07:02.701 00:07:02.701 crypto: 00:07:02.701 00:07:02.701 compress: 00:07:02.701 00:07:02.701 vdpa: 00:07:02.701 00:07:02.701 00:07:02.701 Message: 00:07:02.701 ================= 00:07:02.701 Content Skipped 00:07:02.701 ================= 00:07:02.701 00:07:02.701 apps: 00:07:02.701 dumpcap: explicitly disabled via build config 00:07:02.701 graph: explicitly disabled via build config 00:07:02.701 pdump: explicitly disabled via build config 00:07:02.701 proc-info: explicitly disabled via build config 00:07:02.701 test-acl: explicitly disabled via build config 00:07:02.701 test-bbdev: explicitly disabled via build config 00:07:02.701 test-cmdline: explicitly disabled via build config 00:07:02.701 test-compress-perf: explicitly disabled via build config 00:07:02.701 test-crypto-perf: explicitly disabled via build config 00:07:02.701 test-dma-perf: explicitly disabled via build config 00:07:02.701 test-eventdev: explicitly disabled via build config 00:07:02.701 test-fib: explicitly disabled via build config 
00:07:02.701 test-flow-perf: explicitly disabled via build config 00:07:02.701 test-gpudev: explicitly disabled via build config 00:07:02.701 test-mldev: explicitly disabled via build config 00:07:02.701 test-pipeline: explicitly disabled via build config 00:07:02.701 test-pmd: explicitly disabled via build config 00:07:02.701 test-regex: explicitly disabled via build config 00:07:02.701 test-sad: explicitly disabled via build config 00:07:02.701 test-security-perf: explicitly disabled via build config 00:07:02.701 00:07:02.701 libs: 00:07:02.701 argparse: explicitly disabled via build config 00:07:02.701 metrics: explicitly disabled via build config 00:07:02.701 acl: explicitly disabled via build config 00:07:02.701 bbdev: explicitly disabled via build config 00:07:02.701 bitratestats: explicitly disabled via build config 00:07:02.701 bpf: explicitly disabled via build config 00:07:02.701 cfgfile: explicitly disabled via build config 00:07:02.701 distributor: explicitly disabled via build config 00:07:02.701 efd: explicitly disabled via build config 00:07:02.701 eventdev: explicitly disabled via build config 00:07:02.701 dispatcher: explicitly disabled via build config 00:07:02.701 gpudev: explicitly disabled via build config 00:07:02.701 gro: explicitly disabled via build config 00:07:02.701 gso: explicitly disabled via build config 00:07:02.701 ip_frag: explicitly disabled via build config 00:07:02.701 jobstats: explicitly disabled via build config 00:07:02.701 latencystats: explicitly disabled via build config 00:07:02.701 lpm: explicitly disabled via build config 00:07:02.701 member: explicitly disabled via build config 00:07:02.701 pcapng: explicitly disabled via build config 00:07:02.701 rawdev: explicitly disabled via build config 00:07:02.701 regexdev: explicitly disabled via build config 00:07:02.701 mldev: explicitly disabled via build config 00:07:02.701 rib: explicitly disabled via build config 00:07:02.701 sched: explicitly disabled via build config 00:07:02.701 stack: explicitly disabled via build config 00:07:02.701 ipsec: explicitly disabled via build config 00:07:02.702 pdcp: explicitly disabled via build config 00:07:02.702 fib: explicitly disabled via build config 00:07:02.702 port: explicitly disabled via build config 00:07:02.702 pdump: explicitly disabled via build config 00:07:02.702 table: explicitly disabled via build config 00:07:02.702 pipeline: explicitly disabled via build config 00:07:02.702 graph: explicitly disabled via build config 00:07:02.702 node: explicitly disabled via build config 00:07:02.702 00:07:02.702 drivers: 00:07:02.702 common/cpt: not in enabled drivers build config 00:07:02.702 common/dpaax: not in enabled drivers build config 00:07:02.702 common/iavf: not in enabled drivers build config 00:07:02.702 common/idpf: not in enabled drivers build config 00:07:02.702 common/ionic: not in enabled drivers build config 00:07:02.702 common/mvep: not in enabled drivers build config 00:07:02.702 common/octeontx: not in enabled drivers build config 00:07:02.702 bus/auxiliary: not in enabled drivers build config 00:07:02.702 bus/cdx: not in enabled drivers build config 00:07:02.702 bus/dpaa: not in enabled drivers build config 00:07:02.702 bus/fslmc: not in enabled drivers build config 00:07:02.702 bus/ifpga: not in enabled drivers build config 00:07:02.702 bus/platform: not in enabled drivers build config 00:07:02.702 bus/uacce: not in enabled drivers build config 00:07:02.702 bus/vmbus: not in enabled drivers build config 00:07:02.702 common/cnxk: not 
in enabled drivers build config 00:07:02.702 common/mlx5: not in enabled drivers build config 00:07:02.702 common/nfp: not in enabled drivers build config 00:07:02.702 common/nitrox: not in enabled drivers build config 00:07:02.702 common/qat: not in enabled drivers build config 00:07:02.702 common/sfc_efx: not in enabled drivers build config 00:07:02.702 mempool/bucket: not in enabled drivers build config 00:07:02.702 mempool/cnxk: not in enabled drivers build config 00:07:02.702 mempool/dpaa: not in enabled drivers build config 00:07:02.702 mempool/dpaa2: not in enabled drivers build config 00:07:02.702 mempool/octeontx: not in enabled drivers build config 00:07:02.702 mempool/stack: not in enabled drivers build config 00:07:02.702 dma/cnxk: not in enabled drivers build config 00:07:02.702 dma/dpaa: not in enabled drivers build config 00:07:02.702 dma/dpaa2: not in enabled drivers build config 00:07:02.702 dma/hisilicon: not in enabled drivers build config 00:07:02.702 dma/idxd: not in enabled drivers build config 00:07:02.702 dma/ioat: not in enabled drivers build config 00:07:02.702 dma/skeleton: not in enabled drivers build config 00:07:02.702 net/af_packet: not in enabled drivers build config 00:07:02.702 net/af_xdp: not in enabled drivers build config 00:07:02.702 net/ark: not in enabled drivers build config 00:07:02.702 net/atlantic: not in enabled drivers build config 00:07:02.702 net/avp: not in enabled drivers build config 00:07:02.702 net/axgbe: not in enabled drivers build config 00:07:02.702 net/bnx2x: not in enabled drivers build config 00:07:02.702 net/bnxt: not in enabled drivers build config 00:07:02.702 net/bonding: not in enabled drivers build config 00:07:02.702 net/cnxk: not in enabled drivers build config 00:07:02.702 net/cpfl: not in enabled drivers build config 00:07:02.702 net/cxgbe: not in enabled drivers build config 00:07:02.702 net/dpaa: not in enabled drivers build config 00:07:02.702 net/dpaa2: not in enabled drivers build config 00:07:02.702 net/e1000: not in enabled drivers build config 00:07:02.702 net/ena: not in enabled drivers build config 00:07:02.702 net/enetc: not in enabled drivers build config 00:07:02.702 net/enetfec: not in enabled drivers build config 00:07:02.702 net/enic: not in enabled drivers build config 00:07:02.702 net/failsafe: not in enabled drivers build config 00:07:02.702 net/fm10k: not in enabled drivers build config 00:07:02.702 net/gve: not in enabled drivers build config 00:07:02.702 net/hinic: not in enabled drivers build config 00:07:02.702 net/hns3: not in enabled drivers build config 00:07:02.702 net/i40e: not in enabled drivers build config 00:07:02.702 net/iavf: not in enabled drivers build config 00:07:02.702 net/ice: not in enabled drivers build config 00:07:02.702 net/idpf: not in enabled drivers build config 00:07:02.702 net/igc: not in enabled drivers build config 00:07:02.702 net/ionic: not in enabled drivers build config 00:07:02.702 net/ipn3ke: not in enabled drivers build config 00:07:02.702 net/ixgbe: not in enabled drivers build config 00:07:02.702 net/mana: not in enabled drivers build config 00:07:02.702 net/memif: not in enabled drivers build config 00:07:02.702 net/mlx4: not in enabled drivers build config 00:07:02.702 net/mlx5: not in enabled drivers build config 00:07:02.702 net/mvneta: not in enabled drivers build config 00:07:02.702 net/mvpp2: not in enabled drivers build config 00:07:02.702 net/netvsc: not in enabled drivers build config 00:07:02.702 net/nfb: not in enabled drivers build config 
00:07:02.702 net/nfp: not in enabled drivers build config 00:07:02.702 net/ngbe: not in enabled drivers build config 00:07:02.702 net/null: not in enabled drivers build config 00:07:02.702 net/octeontx: not in enabled drivers build config 00:07:02.702 net/octeon_ep: not in enabled drivers build config 00:07:02.702 net/pcap: not in enabled drivers build config 00:07:02.702 net/pfe: not in enabled drivers build config 00:07:02.702 net/qede: not in enabled drivers build config 00:07:02.702 net/ring: not in enabled drivers build config 00:07:02.702 net/sfc: not in enabled drivers build config 00:07:02.702 net/softnic: not in enabled drivers build config 00:07:02.702 net/tap: not in enabled drivers build config 00:07:02.702 net/thunderx: not in enabled drivers build config 00:07:02.702 net/txgbe: not in enabled drivers build config 00:07:02.702 net/vdev_netvsc: not in enabled drivers build config 00:07:02.702 net/vhost: not in enabled drivers build config 00:07:02.702 net/virtio: not in enabled drivers build config 00:07:02.702 net/vmxnet3: not in enabled drivers build config 00:07:02.702 raw/*: missing internal dependency, "rawdev" 00:07:02.702 crypto/armv8: not in enabled drivers build config 00:07:02.702 crypto/bcmfs: not in enabled drivers build config 00:07:02.702 crypto/caam_jr: not in enabled drivers build config 00:07:02.702 crypto/ccp: not in enabled drivers build config 00:07:02.702 crypto/cnxk: not in enabled drivers build config 00:07:02.702 crypto/dpaa_sec: not in enabled drivers build config 00:07:02.702 crypto/dpaa2_sec: not in enabled drivers build config 00:07:02.702 crypto/ipsec_mb: not in enabled drivers build config 00:07:02.702 crypto/mlx5: not in enabled drivers build config 00:07:02.702 crypto/mvsam: not in enabled drivers build config 00:07:02.702 crypto/nitrox: not in enabled drivers build config 00:07:02.702 crypto/null: not in enabled drivers build config 00:07:02.702 crypto/octeontx: not in enabled drivers build config 00:07:02.702 crypto/openssl: not in enabled drivers build config 00:07:02.702 crypto/scheduler: not in enabled drivers build config 00:07:02.702 crypto/uadk: not in enabled drivers build config 00:07:02.702 crypto/virtio: not in enabled drivers build config 00:07:02.702 compress/isal: not in enabled drivers build config 00:07:02.702 compress/mlx5: not in enabled drivers build config 00:07:02.702 compress/nitrox: not in enabled drivers build config 00:07:02.702 compress/octeontx: not in enabled drivers build config 00:07:02.702 compress/zlib: not in enabled drivers build config 00:07:02.702 regex/*: missing internal dependency, "regexdev" 00:07:02.702 ml/*: missing internal dependency, "mldev" 00:07:02.702 vdpa/ifc: not in enabled drivers build config 00:07:02.702 vdpa/mlx5: not in enabled drivers build config 00:07:02.702 vdpa/nfp: not in enabled drivers build config 00:07:02.702 vdpa/sfc: not in enabled drivers build config 00:07:02.702 event/*: missing internal dependency, "eventdev" 00:07:02.702 baseband/*: missing internal dependency, "bbdev" 00:07:02.702 gpu/*: missing internal dependency, "gpudev" 00:07:02.702 00:07:02.702 00:07:02.702 Build targets in project: 85 00:07:02.702 00:07:02.702 DPDK 24.03.0 00:07:02.702 00:07:02.702 User defined options 00:07:02.702 buildtype : debug 00:07:02.702 default_library : shared 00:07:02.702 libdir : lib 00:07:02.702 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:02.702 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:02.702 c_link_args : 
00:07:02.702 cpu_instruction_set: native 00:07:02.702 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:02.702 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:02.702 enable_docs : false 00:07:02.702 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:02.702 enable_kmods : false 00:07:02.702 max_lcores : 128 00:07:02.702 tests : false 00:07:02.702 00:07:02.702 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:03.267 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:03.267 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:03.267 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:03.267 [3/268] Linking static target lib/librte_kvargs.a 00:07:03.267 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:03.267 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:03.267 [6/268] Linking static target lib/librte_log.a 00:07:03.524 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:03.782 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:03.782 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.782 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:03.782 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:03.782 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:03.782 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:03.782 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:04.039 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:04.039 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:04.039 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:04.039 [18/268] Linking static target lib/librte_telemetry.a 00:07:04.296 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:04.296 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:04.296 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:04.553 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:04.553 [23/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.553 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:04.553 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:04.553 [26/268] Linking target lib/librte_log.so.24.1 00:07:04.553 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:04.553 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:04.553 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:04.810 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:04.810 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:04.810 [32/268] Linking target lib/librte_kvargs.so.24.1 00:07:05.068 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:05.068 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:05.068 [35/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.068 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:05.068 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:05.068 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:05.068 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:05.068 [40/268] Linking target lib/librte_telemetry.so.24.1 00:07:05.068 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:05.068 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:05.325 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:05.325 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:05.325 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:05.325 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:05.325 [47/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:05.583 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:05.583 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:05.583 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:05.840 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:05.840 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:05.840 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:05.840 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:05.840 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:05.840 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:05.840 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:06.097 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:06.097 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:06.097 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:06.097 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:06.355 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:06.355 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:06.355 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:06.355 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:06.355 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:06.612 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:06.612 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
00:07:06.612 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:06.612 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:06.612 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:06.870 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:06.870 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:06.870 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:06.870 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:06.870 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:06.870 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:06.870 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:07.128 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:07.128 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:07.128 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:07.388 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:07.388 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:07.388 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:07.388 [85/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:07.388 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:07.388 [87/268] Linking static target lib/librte_rcu.a 00:07:07.388 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:07.388 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:07.388 [90/268] Linking static target lib/librte_eal.a 00:07:07.648 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:07.648 [92/268] Linking static target lib/librte_ring.a 00:07:07.648 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:07.648 [94/268] Linking static target lib/librte_mempool.a 00:07:07.648 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:07.906 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:07.906 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:07.906 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:07.906 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:07.906 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:07.906 [101/268] Linking static target lib/librte_mbuf.a 00:07:07.906 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.165 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:08.165 [104/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.165 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:08.424 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:08.424 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:08.424 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:08.424 [109/268] Linking static target lib/librte_net.a 00:07:08.424 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:08.424 [111/268] Linking static target 
lib/librte_meter.a 00:07:08.684 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:08.684 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:08.684 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:08.942 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.942 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.942 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.942 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:08.942 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.201 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:09.201 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:09.460 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:09.460 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:09.460 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:09.718 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:09.718 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:09.718 [127/268] Linking static target lib/librte_pci.a 00:07:09.718 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:09.718 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:09.718 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:09.718 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:09.979 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:09.979 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:09.979 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:09.979 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:09.979 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:09.979 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:09.979 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:09.979 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:09.979 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:09.979 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:09.979 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:10.248 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:10.248 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:10.248 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:10.248 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:10.248 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:10.248 [148/268] Linking static target lib/librte_ethdev.a 00:07:10.248 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:10.507 [150/268] Linking static target lib/librte_cmdline.a 
00:07:10.507 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:10.507 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:10.507 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:10.767 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:10.767 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:10.767 [156/268] Linking static target lib/librte_timer.a 00:07:10.767 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:10.767 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:10.767 [159/268] Linking static target lib/librte_hash.a 00:07:11.026 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:11.026 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:11.026 [162/268] Linking static target lib/librte_compressdev.a 00:07:11.026 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:11.026 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:11.285 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:11.285 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:11.285 [167/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:11.285 [168/268] Linking static target lib/librte_dmadev.a 00:07:11.543 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:11.543 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:11.543 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:11.543 [172/268] Linking static target lib/librte_cryptodev.a 00:07:11.543 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:11.801 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:11.801 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:12.059 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.059 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:12.059 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.059 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.059 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:12.317 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.317 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:12.317 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:12.317 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:12.576 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:12.576 [186/268] Linking static target lib/librte_power.a 00:07:12.576 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:12.576 [188/268] Linking static target lib/librte_reorder.a 00:07:12.834 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:12.834 [190/268] Linking static target lib/librte_security.a 
00:07:12.834 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:12.835 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:12.835 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:13.092 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:13.092 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.660 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:13.660 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.660 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:13.660 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:13.660 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:13.918 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.918 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:13.918 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:14.176 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:14.176 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:14.176 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.176 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:14.176 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:14.434 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:14.434 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:14.434 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:14.434 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:14.434 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:14.434 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:14.694 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:14.694 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:14.694 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:14.694 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:14.694 [219/268] Linking static target drivers/librte_bus_vdev.a 00:07:14.694 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:14.694 [221/268] Linking static target drivers/librte_bus_pci.a 00:07:14.694 [222/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:14.694 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:14.694 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:14.694 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:14.694 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:14.952 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.210 [228/268] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.468 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:15.468 [230/268] Linking static target lib/librte_vhost.a 00:07:18.025 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.953 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.953 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:20.212 [234/268] Linking target lib/librte_eal.so.24.1 00:07:20.212 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:20.471 [236/268] Linking target lib/librte_meter.so.24.1 00:07:20.471 [237/268] Linking target lib/librte_pci.so.24.1 00:07:20.471 [238/268] Linking target lib/librte_timer.so.24.1 00:07:20.471 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:20.471 [240/268] Linking target lib/librte_ring.so.24.1 00:07:20.471 [241/268] Linking target lib/librte_dmadev.so.24.1 00:07:20.471 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:20.471 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:20.471 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:20.471 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:20.471 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:20.471 [247/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:20.471 [248/268] Linking target lib/librte_rcu.so.24.1 00:07:20.730 [249/268] Linking target lib/librte_mempool.so.24.1 00:07:20.730 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:20.730 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:20.730 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:20.730 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:20.987 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:20.987 [255/268] Linking target lib/librte_compressdev.so.24.1 00:07:20.987 [256/268] Linking target lib/librte_reorder.so.24.1 00:07:20.987 [257/268] Linking target lib/librte_net.so.24.1 00:07:20.987 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:21.245 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:21.245 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:21.245 [261/268] Linking target lib/librte_cmdline.so.24.1 00:07:21.245 [262/268] Linking target lib/librte_security.so.24.1 00:07:21.245 [263/268] Linking target lib/librte_hash.so.24.1 00:07:21.245 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:21.245 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:21.245 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:21.504 [267/268] Linking target lib/librte_power.so.24.1 00:07:21.504 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:21.504 INFO: autodetecting backend as ninja 00:07:21.504 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:43.443 CC lib/ut/ut.o 00:07:43.443 CC lib/ut_mock/mock.o 
00:07:43.443 CC lib/log/log.o 00:07:43.443 CC lib/log/log_flags.o 00:07:43.443 CC lib/log/log_deprecated.o 00:07:43.443 LIB libspdk_ut.a 00:07:43.443 LIB libspdk_ut_mock.a 00:07:43.443 SO libspdk_ut.so.2.0 00:07:43.443 SO libspdk_ut_mock.so.6.0 00:07:43.443 LIB libspdk_log.a 00:07:43.443 SYMLINK libspdk_ut.so 00:07:43.443 SYMLINK libspdk_ut_mock.so 00:07:43.443 SO libspdk_log.so.7.1 00:07:43.443 SYMLINK libspdk_log.so 00:07:43.443 CC lib/util/base64.o 00:07:43.443 CC lib/util/cpuset.o 00:07:43.443 CC lib/util/bit_array.o 00:07:43.443 CC lib/util/crc32.o 00:07:43.443 CC lib/util/crc16.o 00:07:43.443 CC lib/util/crc32c.o 00:07:43.443 CC lib/dma/dma.o 00:07:43.443 CC lib/ioat/ioat.o 00:07:43.443 CXX lib/trace_parser/trace.o 00:07:43.443 CC lib/vfio_user/host/vfio_user_pci.o 00:07:43.443 CC lib/util/crc32_ieee.o 00:07:43.443 CC lib/util/crc64.o 00:07:43.443 CC lib/util/dif.o 00:07:43.443 CC lib/util/fd.o 00:07:43.443 LIB libspdk_dma.a 00:07:43.443 CC lib/util/fd_group.o 00:07:43.443 CC lib/util/file.o 00:07:43.443 SO libspdk_dma.so.5.0 00:07:43.443 CC lib/util/hexlify.o 00:07:43.443 LIB libspdk_ioat.a 00:07:43.443 CC lib/util/iov.o 00:07:43.443 SYMLINK libspdk_dma.so 00:07:43.443 CC lib/util/math.o 00:07:43.443 SO libspdk_ioat.so.7.0 00:07:43.443 CC lib/util/net.o 00:07:43.443 CC lib/util/pipe.o 00:07:43.443 SYMLINK libspdk_ioat.so 00:07:43.443 CC lib/util/strerror_tls.o 00:07:43.443 CC lib/vfio_user/host/vfio_user.o 00:07:43.443 CC lib/util/string.o 00:07:43.443 CC lib/util/uuid.o 00:07:43.443 CC lib/util/xor.o 00:07:43.443 CC lib/util/zipf.o 00:07:43.443 CC lib/util/md5.o 00:07:43.443 LIB libspdk_vfio_user.a 00:07:43.443 SO libspdk_vfio_user.so.5.0 00:07:43.443 SYMLINK libspdk_vfio_user.so 00:07:43.443 LIB libspdk_util.a 00:07:43.443 SO libspdk_util.so.10.1 00:07:43.443 LIB libspdk_trace_parser.a 00:07:43.443 SO libspdk_trace_parser.so.6.0 00:07:43.443 SYMLINK libspdk_util.so 00:07:43.443 SYMLINK libspdk_trace_parser.so 00:07:43.443 CC lib/env_dpdk/env.o 00:07:43.443 CC lib/env_dpdk/memory.o 00:07:43.443 CC lib/env_dpdk/pci.o 00:07:43.443 CC lib/env_dpdk/threads.o 00:07:43.443 CC lib/env_dpdk/init.o 00:07:43.443 CC lib/idxd/idxd.o 00:07:43.443 CC lib/conf/conf.o 00:07:43.443 CC lib/vmd/vmd.o 00:07:43.443 CC lib/json/json_parse.o 00:07:43.443 CC lib/rdma_utils/rdma_utils.o 00:07:43.443 CC lib/env_dpdk/pci_ioat.o 00:07:43.443 LIB libspdk_conf.a 00:07:43.443 SO libspdk_conf.so.6.0 00:07:43.443 CC lib/json/json_util.o 00:07:43.443 LIB libspdk_rdma_utils.a 00:07:43.443 SYMLINK libspdk_conf.so 00:07:43.443 CC lib/env_dpdk/pci_virtio.o 00:07:43.443 CC lib/json/json_write.o 00:07:43.443 SO libspdk_rdma_utils.so.1.0 00:07:43.443 CC lib/env_dpdk/pci_vmd.o 00:07:43.443 SYMLINK libspdk_rdma_utils.so 00:07:43.443 CC lib/vmd/led.o 00:07:43.443 CC lib/env_dpdk/pci_idxd.o 00:07:43.443 CC lib/env_dpdk/pci_event.o 00:07:43.443 CC lib/env_dpdk/sigbus_handler.o 00:07:43.443 CC lib/env_dpdk/pci_dpdk.o 00:07:43.443 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:43.443 CC lib/idxd/idxd_user.o 00:07:43.443 CC lib/idxd/idxd_kernel.o 00:07:43.443 LIB libspdk_json.a 00:07:43.443 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:43.443 LIB libspdk_vmd.a 00:07:43.443 SO libspdk_json.so.6.0 00:07:43.443 SO libspdk_vmd.so.6.0 00:07:43.443 SYMLINK libspdk_json.so 00:07:43.443 SYMLINK libspdk_vmd.so 00:07:43.443 LIB libspdk_idxd.a 00:07:43.702 SO libspdk_idxd.so.12.1 00:07:43.702 SYMLINK libspdk_idxd.so 00:07:43.702 CC lib/rdma_provider/common.o 00:07:43.702 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:43.702 CC lib/jsonrpc/jsonrpc_server.o 
00:07:43.702 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:43.702 CC lib/jsonrpc/jsonrpc_client.o 00:07:43.702 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:43.961 LIB libspdk_rdma_provider.a 00:07:43.961 SO libspdk_rdma_provider.so.7.0 00:07:43.961 SYMLINK libspdk_rdma_provider.so 00:07:43.961 LIB libspdk_env_dpdk.a 00:07:43.961 LIB libspdk_jsonrpc.a 00:07:44.220 SO libspdk_env_dpdk.so.15.1 00:07:44.220 SO libspdk_jsonrpc.so.6.0 00:07:44.220 SYMLINK libspdk_jsonrpc.so 00:07:44.220 SYMLINK libspdk_env_dpdk.so 00:07:44.788 CC lib/rpc/rpc.o 00:07:44.788 LIB libspdk_rpc.a 00:07:45.047 SO libspdk_rpc.so.6.0 00:07:45.047 SYMLINK libspdk_rpc.so 00:07:45.306 CC lib/notify/notify.o 00:07:45.306 CC lib/notify/notify_rpc.o 00:07:45.306 CC lib/trace/trace.o 00:07:45.306 CC lib/trace/trace_flags.o 00:07:45.306 CC lib/trace/trace_rpc.o 00:07:45.306 CC lib/keyring/keyring_rpc.o 00:07:45.306 CC lib/keyring/keyring.o 00:07:45.565 LIB libspdk_notify.a 00:07:45.565 SO libspdk_notify.so.6.0 00:07:45.565 LIB libspdk_keyring.a 00:07:45.565 LIB libspdk_trace.a 00:07:45.565 SYMLINK libspdk_notify.so 00:07:45.825 SO libspdk_keyring.so.2.0 00:07:45.825 SO libspdk_trace.so.11.0 00:07:45.825 SYMLINK libspdk_keyring.so 00:07:45.825 SYMLINK libspdk_trace.so 00:07:46.390 CC lib/sock/sock.o 00:07:46.390 CC lib/sock/sock_rpc.o 00:07:46.390 CC lib/thread/thread.o 00:07:46.390 CC lib/thread/iobuf.o 00:07:46.658 LIB libspdk_sock.a 00:07:46.658 SO libspdk_sock.so.10.0 00:07:46.658 SYMLINK libspdk_sock.so 00:07:47.225 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:47.225 CC lib/nvme/nvme_ns_cmd.o 00:07:47.225 CC lib/nvme/nvme_ctrlr.o 00:07:47.225 CC lib/nvme/nvme_fabric.o 00:07:47.225 CC lib/nvme/nvme_ns.o 00:07:47.225 CC lib/nvme/nvme_pcie_common.o 00:07:47.225 CC lib/nvme/nvme_pcie.o 00:07:47.225 CC lib/nvme/nvme_qpair.o 00:07:47.225 CC lib/nvme/nvme.o 00:07:47.792 LIB libspdk_thread.a 00:07:47.792 SO libspdk_thread.so.11.0 00:07:47.792 CC lib/nvme/nvme_quirks.o 00:07:47.792 CC lib/nvme/nvme_transport.o 00:07:47.792 CC lib/nvme/nvme_discovery.o 00:07:47.792 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:47.792 SYMLINK libspdk_thread.so 00:07:47.792 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:47.792 CC lib/nvme/nvme_tcp.o 00:07:47.792 CC lib/nvme/nvme_opal.o 00:07:47.792 CC lib/nvme/nvme_io_msg.o 00:07:48.050 CC lib/accel/accel.o 00:07:48.309 CC lib/accel/accel_rpc.o 00:07:48.309 CC lib/accel/accel_sw.o 00:07:48.309 CC lib/nvme/nvme_poll_group.o 00:07:48.566 CC lib/nvme/nvme_zns.o 00:07:48.566 CC lib/blob/blobstore.o 00:07:48.566 CC lib/init/json_config.o 00:07:48.566 CC lib/virtio/virtio.o 00:07:48.566 CC lib/virtio/virtio_vhost_user.o 00:07:48.566 CC lib/fsdev/fsdev.o 00:07:48.824 CC lib/init/subsystem.o 00:07:48.824 CC lib/init/subsystem_rpc.o 00:07:48.824 CC lib/virtio/virtio_vfio_user.o 00:07:48.824 CC lib/virtio/virtio_pci.o 00:07:49.186 CC lib/blob/request.o 00:07:49.186 CC lib/init/rpc.o 00:07:49.186 CC lib/blob/zeroes.o 00:07:49.186 CC lib/blob/blob_bs_dev.o 00:07:49.186 LIB libspdk_accel.a 00:07:49.186 CC lib/fsdev/fsdev_io.o 00:07:49.186 CC lib/nvme/nvme_stubs.o 00:07:49.186 LIB libspdk_init.a 00:07:49.186 SO libspdk_accel.so.16.0 00:07:49.186 CC lib/fsdev/fsdev_rpc.o 00:07:49.186 LIB libspdk_virtio.a 00:07:49.186 CC lib/nvme/nvme_auth.o 00:07:49.186 SO libspdk_init.so.6.0 00:07:49.186 SYMLINK libspdk_accel.so 00:07:49.186 SO libspdk_virtio.so.7.0 00:07:49.186 SYMLINK libspdk_init.so 00:07:49.186 CC lib/nvme/nvme_cuse.o 00:07:49.457 CC lib/nvme/nvme_rdma.o 00:07:49.457 SYMLINK libspdk_virtio.so 00:07:49.457 LIB libspdk_fsdev.a 00:07:49.457 SO 
libspdk_fsdev.so.2.0 00:07:49.457 CC lib/bdev/bdev.o 00:07:49.457 CC lib/bdev/bdev_zone.o 00:07:49.457 CC lib/bdev/bdev_rpc.o 00:07:49.457 CC lib/event/app.o 00:07:49.457 SYMLINK libspdk_fsdev.so 00:07:49.457 CC lib/event/reactor.o 00:07:49.715 CC lib/bdev/part.o 00:07:49.715 CC lib/bdev/scsi_nvme.o 00:07:49.715 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:49.972 CC lib/event/log_rpc.o 00:07:49.972 CC lib/event/app_rpc.o 00:07:49.972 CC lib/event/scheduler_static.o 00:07:50.230 LIB libspdk_event.a 00:07:50.230 SO libspdk_event.so.14.0 00:07:50.230 SYMLINK libspdk_event.so 00:07:50.488 LIB libspdk_fuse_dispatcher.a 00:07:50.488 SO libspdk_fuse_dispatcher.so.1.0 00:07:50.488 SYMLINK libspdk_fuse_dispatcher.so 00:07:50.488 LIB libspdk_nvme.a 00:07:50.746 SO libspdk_nvme.so.15.0 00:07:51.002 SYMLINK libspdk_nvme.so 00:07:51.259 LIB libspdk_blob.a 00:07:51.260 SO libspdk_blob.so.12.0 00:07:51.260 SYMLINK libspdk_blob.so 00:07:51.823 CC lib/blobfs/blobfs.o 00:07:51.823 CC lib/blobfs/tree.o 00:07:51.823 CC lib/lvol/lvol.o 00:07:52.081 LIB libspdk_bdev.a 00:07:52.081 SO libspdk_bdev.so.17.0 00:07:52.339 SYMLINK libspdk_bdev.so 00:07:52.597 CC lib/nbd/nbd.o 00:07:52.597 CC lib/ftl/ftl_core.o 00:07:52.597 CC lib/ftl/ftl_init.o 00:07:52.597 CC lib/nbd/nbd_rpc.o 00:07:52.597 CC lib/nvmf/ctrlr.o 00:07:52.597 CC lib/ublk/ublk.o 00:07:52.597 CC lib/nvmf/ctrlr_discovery.o 00:07:52.597 CC lib/scsi/dev.o 00:07:52.597 LIB libspdk_blobfs.a 00:07:52.597 SO libspdk_blobfs.so.11.0 00:07:52.855 LIB libspdk_lvol.a 00:07:52.855 CC lib/scsi/lun.o 00:07:52.855 SYMLINK libspdk_blobfs.so 00:07:52.855 CC lib/scsi/port.o 00:07:52.855 SO libspdk_lvol.so.11.0 00:07:52.855 CC lib/ftl/ftl_layout.o 00:07:52.855 CC lib/ftl/ftl_debug.o 00:07:52.855 SYMLINK libspdk_lvol.so 00:07:52.855 CC lib/scsi/scsi.o 00:07:52.855 CC lib/ftl/ftl_io.o 00:07:52.855 CC lib/scsi/scsi_bdev.o 00:07:53.113 LIB libspdk_nbd.a 00:07:53.113 CC lib/ftl/ftl_sb.o 00:07:53.113 SO libspdk_nbd.so.7.0 00:07:53.113 CC lib/ublk/ublk_rpc.o 00:07:53.113 CC lib/ftl/ftl_l2p.o 00:07:53.113 CC lib/nvmf/ctrlr_bdev.o 00:07:53.113 SYMLINK libspdk_nbd.so 00:07:53.113 CC lib/ftl/ftl_l2p_flat.o 00:07:53.113 CC lib/scsi/scsi_pr.o 00:07:53.113 CC lib/scsi/scsi_rpc.o 00:07:53.113 CC lib/scsi/task.o 00:07:53.372 CC lib/ftl/ftl_nv_cache.o 00:07:53.372 LIB libspdk_ublk.a 00:07:53.372 SO libspdk_ublk.so.3.0 00:07:53.372 CC lib/ftl/ftl_band.o 00:07:53.372 CC lib/nvmf/subsystem.o 00:07:53.372 SYMLINK libspdk_ublk.so 00:07:53.372 CC lib/ftl/ftl_band_ops.o 00:07:53.372 CC lib/ftl/ftl_writer.o 00:07:53.372 CC lib/ftl/ftl_rq.o 00:07:53.372 CC lib/nvmf/nvmf.o 00:07:53.372 LIB libspdk_scsi.a 00:07:53.630 SO libspdk_scsi.so.9.0 00:07:53.630 CC lib/ftl/ftl_reloc.o 00:07:53.630 CC lib/nvmf/nvmf_rpc.o 00:07:53.630 SYMLINK libspdk_scsi.so 00:07:53.630 CC lib/nvmf/transport.o 00:07:53.630 CC lib/nvmf/tcp.o 00:07:53.630 CC lib/nvmf/stubs.o 00:07:53.888 CC lib/ftl/ftl_l2p_cache.o 00:07:53.888 CC lib/ftl/ftl_p2l.o 00:07:54.146 CC lib/nvmf/mdns_server.o 00:07:54.405 CC lib/ftl/ftl_p2l_log.o 00:07:54.405 CC lib/ftl/mngt/ftl_mngt.o 00:07:54.405 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:54.405 CC lib/iscsi/conn.o 00:07:54.405 CC lib/nvmf/rdma.o 00:07:54.405 CC lib/vhost/vhost.o 00:07:54.405 CC lib/vhost/vhost_rpc.o 00:07:54.664 CC lib/vhost/vhost_scsi.o 00:07:54.664 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:54.664 CC lib/nvmf/auth.o 00:07:54.664 CC lib/iscsi/init_grp.o 00:07:54.664 CC lib/iscsi/iscsi.o 00:07:54.664 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:54.922 CC lib/iscsi/param.o 00:07:54.922 CC 
lib/iscsi/portal_grp.o 00:07:54.922 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:55.201 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:55.201 CC lib/iscsi/tgt_node.o 00:07:55.201 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:55.201 CC lib/iscsi/iscsi_subsystem.o 00:07:55.201 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:55.460 CC lib/vhost/vhost_blk.o 00:07:55.460 CC lib/vhost/rte_vhost_user.o 00:07:55.460 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:55.460 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:55.460 CC lib/iscsi/iscsi_rpc.o 00:07:55.460 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:55.718 CC lib/iscsi/task.o 00:07:55.718 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:55.718 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:55.718 CC lib/ftl/utils/ftl_conf.o 00:07:55.718 CC lib/ftl/utils/ftl_md.o 00:07:55.977 CC lib/ftl/utils/ftl_mempool.o 00:07:55.977 CC lib/ftl/utils/ftl_bitmap.o 00:07:55.977 CC lib/ftl/utils/ftl_property.o 00:07:55.977 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:55.977 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:55.977 LIB libspdk_iscsi.a 00:07:55.977 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:56.237 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:56.237 SO libspdk_iscsi.so.8.0 00:07:56.237 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:56.237 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:56.237 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:56.237 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:56.237 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:56.497 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:56.497 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:56.497 LIB libspdk_vhost.a 00:07:56.497 LIB libspdk_nvmf.a 00:07:56.497 SYMLINK libspdk_iscsi.so 00:07:56.497 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:56.497 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:56.497 CC lib/ftl/base/ftl_base_dev.o 00:07:56.497 CC lib/ftl/base/ftl_base_bdev.o 00:07:56.497 SO libspdk_vhost.so.8.0 00:07:56.497 CC lib/ftl/ftl_trace.o 00:07:56.497 SO libspdk_nvmf.so.20.0 00:07:56.756 SYMLINK libspdk_vhost.so 00:07:56.756 SYMLINK libspdk_nvmf.so 00:07:56.756 LIB libspdk_ftl.a 00:07:57.324 SO libspdk_ftl.so.9.0 00:07:57.324 SYMLINK libspdk_ftl.so 00:07:57.897 CC module/env_dpdk/env_dpdk_rpc.o 00:07:57.897 CC module/accel/ioat/accel_ioat.o 00:07:57.897 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:57.897 CC module/accel/dsa/accel_dsa.o 00:07:57.897 CC module/accel/error/accel_error.o 00:07:57.897 CC module/accel/iaa/accel_iaa.o 00:07:57.897 CC module/sock/posix/posix.o 00:07:57.897 CC module/fsdev/aio/fsdev_aio.o 00:07:57.897 CC module/keyring/file/keyring.o 00:07:57.897 CC module/blob/bdev/blob_bdev.o 00:07:57.897 LIB libspdk_env_dpdk_rpc.a 00:07:58.165 SO libspdk_env_dpdk_rpc.so.6.0 00:07:58.165 SYMLINK libspdk_env_dpdk_rpc.so 00:07:58.165 CC module/accel/iaa/accel_iaa_rpc.o 00:07:58.165 CC module/keyring/file/keyring_rpc.o 00:07:58.165 CC module/accel/ioat/accel_ioat_rpc.o 00:07:58.165 LIB libspdk_scheduler_dynamic.a 00:07:58.165 CC module/accel/error/accel_error_rpc.o 00:07:58.165 SO libspdk_scheduler_dynamic.so.4.0 00:07:58.165 CC module/accel/dsa/accel_dsa_rpc.o 00:07:58.165 SYMLINK libspdk_scheduler_dynamic.so 00:07:58.165 LIB libspdk_blob_bdev.a 00:07:58.165 LIB libspdk_keyring_file.a 00:07:58.165 LIB libspdk_accel_iaa.a 00:07:58.165 LIB libspdk_accel_ioat.a 00:07:58.165 SO libspdk_blob_bdev.so.12.0 00:07:58.165 LIB libspdk_accel_error.a 00:07:58.165 SO libspdk_keyring_file.so.2.0 00:07:58.165 SO libspdk_accel_iaa.so.3.0 00:07:58.165 SO libspdk_accel_ioat.so.6.0 00:07:58.423 SO libspdk_accel_error.so.2.0 00:07:58.423 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:58.423 SYMLINK 
libspdk_blob_bdev.so 00:07:58.423 LIB libspdk_accel_dsa.a 00:07:58.423 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:58.423 SYMLINK libspdk_accel_ioat.so 00:07:58.423 SYMLINK libspdk_accel_iaa.so 00:07:58.423 SYMLINK libspdk_keyring_file.so 00:07:58.423 CC module/fsdev/aio/linux_aio_mgr.o 00:07:58.423 SO libspdk_accel_dsa.so.5.0 00:07:58.423 SYMLINK libspdk_accel_error.so 00:07:58.423 SYMLINK libspdk_accel_dsa.so 00:07:58.423 CC module/scheduler/gscheduler/gscheduler.o 00:07:58.423 LIB libspdk_scheduler_dpdk_governor.a 00:07:58.423 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:58.681 CC module/sock/uring/uring.o 00:07:58.681 CC module/keyring/linux/keyring.o 00:07:58.681 LIB libspdk_fsdev_aio.a 00:07:58.681 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:58.681 CC module/keyring/linux/keyring_rpc.o 00:07:58.681 LIB libspdk_scheduler_gscheduler.a 00:07:58.681 SO libspdk_fsdev_aio.so.1.0 00:07:58.681 SO libspdk_scheduler_gscheduler.so.4.0 00:07:58.681 CC module/bdev/delay/vbdev_delay.o 00:07:58.681 CC module/bdev/error/vbdev_error.o 00:07:58.681 LIB libspdk_sock_posix.a 00:07:58.681 SYMLINK libspdk_fsdev_aio.so 00:07:58.681 SYMLINK libspdk_scheduler_gscheduler.so 00:07:58.681 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:58.681 CC module/blobfs/bdev/blobfs_bdev.o 00:07:58.681 LIB libspdk_keyring_linux.a 00:07:58.681 SO libspdk_sock_posix.so.6.0 00:07:58.681 CC module/bdev/gpt/gpt.o 00:07:58.681 SO libspdk_keyring_linux.so.1.0 00:07:58.939 SYMLINK libspdk_keyring_linux.so 00:07:58.939 SYMLINK libspdk_sock_posix.so 00:07:58.939 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:58.939 CC module/bdev/lvol/vbdev_lvol.o 00:07:58.939 CC module/bdev/malloc/bdev_malloc.o 00:07:58.939 CC module/bdev/gpt/vbdev_gpt.o 00:07:58.939 CC module/bdev/error/vbdev_error_rpc.o 00:07:58.939 CC module/bdev/null/bdev_null.o 00:07:58.939 CC module/bdev/nvme/bdev_nvme.o 00:07:58.939 LIB libspdk_bdev_delay.a 00:07:59.196 CC module/bdev/passthru/vbdev_passthru.o 00:07:59.196 SO libspdk_bdev_delay.so.6.0 00:07:59.196 LIB libspdk_blobfs_bdev.a 00:07:59.196 SO libspdk_blobfs_bdev.so.6.0 00:07:59.196 LIB libspdk_bdev_error.a 00:07:59.196 SYMLINK libspdk_bdev_delay.so 00:07:59.196 CC module/bdev/null/bdev_null_rpc.o 00:07:59.196 SO libspdk_bdev_error.so.6.0 00:07:59.196 SYMLINK libspdk_blobfs_bdev.so 00:07:59.196 LIB libspdk_sock_uring.a 00:07:59.196 SO libspdk_sock_uring.so.5.0 00:07:59.196 LIB libspdk_bdev_gpt.a 00:07:59.196 SYMLINK libspdk_bdev_error.so 00:07:59.196 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:59.196 SO libspdk_bdev_gpt.so.6.0 00:07:59.196 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:59.196 SYMLINK libspdk_sock_uring.so 00:07:59.196 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:59.453 SYMLINK libspdk_bdev_gpt.so 00:07:59.453 LIB libspdk_bdev_null.a 00:07:59.453 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:59.453 CC module/bdev/raid/bdev_raid.o 00:07:59.453 SO libspdk_bdev_null.so.6.0 00:07:59.453 LIB libspdk_bdev_malloc.a 00:07:59.453 CC module/bdev/split/vbdev_split.o 00:07:59.453 CC module/bdev/raid/bdev_raid_rpc.o 00:07:59.453 SO libspdk_bdev_malloc.so.6.0 00:07:59.453 SYMLINK libspdk_bdev_null.so 00:07:59.453 LIB libspdk_bdev_passthru.a 00:07:59.453 SO libspdk_bdev_passthru.so.6.0 00:07:59.453 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:59.453 SYMLINK libspdk_bdev_malloc.so 00:07:59.453 CC module/bdev/nvme/nvme_rpc.o 00:07:59.710 SYMLINK libspdk_bdev_passthru.so 00:07:59.710 CC module/bdev/nvme/bdev_mdns_client.o 00:07:59.710 LIB libspdk_bdev_lvol.a 00:07:59.710 CC module/bdev/uring/bdev_uring.o 
00:07:59.710 CC module/bdev/nvme/vbdev_opal.o 00:07:59.710 SO libspdk_bdev_lvol.so.6.0 00:07:59.710 CC module/bdev/split/vbdev_split_rpc.o 00:07:59.710 SYMLINK libspdk_bdev_lvol.so 00:07:59.968 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:59.968 LIB libspdk_bdev_split.a 00:07:59.968 SO libspdk_bdev_split.so.6.0 00:07:59.968 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:59.968 CC module/bdev/aio/bdev_aio.o 00:07:59.968 CC module/bdev/ftl/bdev_ftl.o 00:07:59.968 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:59.968 CC module/bdev/iscsi/bdev_iscsi.o 00:07:59.968 SYMLINK libspdk_bdev_split.so 00:07:59.968 CC module/bdev/uring/bdev_uring_rpc.o 00:07:59.968 LIB libspdk_bdev_zone_block.a 00:07:59.968 SO libspdk_bdev_zone_block.so.6.0 00:08:00.226 CC module/bdev/raid/bdev_raid_sb.o 00:08:00.226 SYMLINK libspdk_bdev_zone_block.so 00:08:00.226 CC module/bdev/raid/raid0.o 00:08:00.226 CC module/bdev/raid/raid1.o 00:08:00.226 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:00.226 LIB libspdk_bdev_uring.a 00:08:00.226 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:00.226 SO libspdk_bdev_uring.so.6.0 00:08:00.226 CC module/bdev/aio/bdev_aio_rpc.o 00:08:00.226 CC module/bdev/raid/concat.o 00:08:00.226 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:00.483 SYMLINK libspdk_bdev_uring.so 00:08:00.483 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:00.483 LIB libspdk_bdev_aio.a 00:08:00.483 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:00.483 LIB libspdk_bdev_ftl.a 00:08:00.483 LIB libspdk_bdev_iscsi.a 00:08:00.483 SO libspdk_bdev_ftl.so.6.0 00:08:00.483 SO libspdk_bdev_aio.so.6.0 00:08:00.483 SO libspdk_bdev_iscsi.so.6.0 00:08:00.483 SYMLINK libspdk_bdev_ftl.so 00:08:00.483 LIB libspdk_bdev_raid.a 00:08:00.483 SYMLINK libspdk_bdev_aio.so 00:08:00.740 SYMLINK libspdk_bdev_iscsi.so 00:08:00.740 SO libspdk_bdev_raid.so.6.0 00:08:00.740 LIB libspdk_bdev_virtio.a 00:08:00.740 SYMLINK libspdk_bdev_raid.so 00:08:00.740 SO libspdk_bdev_virtio.so.6.0 00:08:00.740 SYMLINK libspdk_bdev_virtio.so 00:08:01.305 LIB libspdk_bdev_nvme.a 00:08:01.563 SO libspdk_bdev_nvme.so.7.1 00:08:01.563 SYMLINK libspdk_bdev_nvme.so 00:08:02.130 CC module/event/subsystems/keyring/keyring.o 00:08:02.130 CC module/event/subsystems/vmd/vmd.o 00:08:02.130 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:02.130 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:02.130 CC module/event/subsystems/iobuf/iobuf.o 00:08:02.130 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:02.130 CC module/event/subsystems/fsdev/fsdev.o 00:08:02.130 CC module/event/subsystems/scheduler/scheduler.o 00:08:02.387 CC module/event/subsystems/sock/sock.o 00:08:02.387 LIB libspdk_event_vhost_blk.a 00:08:02.387 LIB libspdk_event_vmd.a 00:08:02.387 LIB libspdk_event_scheduler.a 00:08:02.387 LIB libspdk_event_fsdev.a 00:08:02.387 LIB libspdk_event_iobuf.a 00:08:02.387 LIB libspdk_event_keyring.a 00:08:02.387 LIB libspdk_event_sock.a 00:08:02.387 SO libspdk_event_vhost_blk.so.3.0 00:08:02.387 SO libspdk_event_scheduler.so.4.0 00:08:02.387 SO libspdk_event_fsdev.so.1.0 00:08:02.387 SO libspdk_event_vmd.so.6.0 00:08:02.387 SO libspdk_event_keyring.so.1.0 00:08:02.387 SO libspdk_event_iobuf.so.3.0 00:08:02.387 SO libspdk_event_sock.so.5.0 00:08:02.387 SYMLINK libspdk_event_scheduler.so 00:08:02.387 SYMLINK libspdk_event_vhost_blk.so 00:08:02.387 SYMLINK libspdk_event_fsdev.so 00:08:02.387 SYMLINK libspdk_event_vmd.so 00:08:02.387 SYMLINK libspdk_event_keyring.so 00:08:02.387 SYMLINK libspdk_event_sock.so 00:08:02.387 SYMLINK libspdk_event_iobuf.so 00:08:02.952 CC 
module/event/subsystems/accel/accel.o 00:08:02.952 LIB libspdk_event_accel.a 00:08:03.210 SO libspdk_event_accel.so.6.0 00:08:03.210 SYMLINK libspdk_event_accel.so 00:08:03.777 CC module/event/subsystems/bdev/bdev.o 00:08:03.777 LIB libspdk_event_bdev.a 00:08:03.777 SO libspdk_event_bdev.so.6.0 00:08:04.036 SYMLINK libspdk_event_bdev.so 00:08:04.294 CC module/event/subsystems/scsi/scsi.o 00:08:04.294 CC module/event/subsystems/ublk/ublk.o 00:08:04.294 CC module/event/subsystems/nbd/nbd.o 00:08:04.294 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:04.294 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:04.294 LIB libspdk_event_scsi.a 00:08:04.294 LIB libspdk_event_ublk.a 00:08:04.553 LIB libspdk_event_nbd.a 00:08:04.553 SO libspdk_event_scsi.so.6.0 00:08:04.553 SO libspdk_event_ublk.so.3.0 00:08:04.553 SO libspdk_event_nbd.so.6.0 00:08:04.553 SYMLINK libspdk_event_scsi.so 00:08:04.553 SYMLINK libspdk_event_ublk.so 00:08:04.553 LIB libspdk_event_nvmf.a 00:08:04.553 SYMLINK libspdk_event_nbd.so 00:08:04.553 SO libspdk_event_nvmf.so.6.0 00:08:04.553 SYMLINK libspdk_event_nvmf.so 00:08:04.812 CC module/event/subsystems/iscsi/iscsi.o 00:08:04.812 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:05.071 LIB libspdk_event_iscsi.a 00:08:05.071 LIB libspdk_event_vhost_scsi.a 00:08:05.071 SO libspdk_event_iscsi.so.6.0 00:08:05.071 SO libspdk_event_vhost_scsi.so.3.0 00:08:05.071 SYMLINK libspdk_event_iscsi.so 00:08:05.071 SYMLINK libspdk_event_vhost_scsi.so 00:08:05.329 SO libspdk.so.6.0 00:08:05.329 SYMLINK libspdk.so 00:08:05.588 CC test/rpc_client/rpc_client_test.o 00:08:05.588 TEST_HEADER include/spdk/accel.h 00:08:05.588 TEST_HEADER include/spdk/accel_module.h 00:08:05.588 TEST_HEADER include/spdk/assert.h 00:08:05.588 CC app/trace_record/trace_record.o 00:08:05.588 TEST_HEADER include/spdk/barrier.h 00:08:05.588 CXX app/trace/trace.o 00:08:05.588 TEST_HEADER include/spdk/base64.h 00:08:05.588 TEST_HEADER include/spdk/bdev.h 00:08:05.588 TEST_HEADER include/spdk/bdev_module.h 00:08:05.588 TEST_HEADER include/spdk/bdev_zone.h 00:08:05.588 TEST_HEADER include/spdk/bit_array.h 00:08:05.588 TEST_HEADER include/spdk/bit_pool.h 00:08:05.588 TEST_HEADER include/spdk/blob_bdev.h 00:08:05.588 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:05.588 TEST_HEADER include/spdk/blobfs.h 00:08:05.588 TEST_HEADER include/spdk/blob.h 00:08:05.588 TEST_HEADER include/spdk/conf.h 00:08:05.588 TEST_HEADER include/spdk/config.h 00:08:05.588 TEST_HEADER include/spdk/cpuset.h 00:08:05.588 TEST_HEADER include/spdk/crc16.h 00:08:05.847 TEST_HEADER include/spdk/crc32.h 00:08:05.847 TEST_HEADER include/spdk/crc64.h 00:08:05.847 CC app/nvmf_tgt/nvmf_main.o 00:08:05.847 TEST_HEADER include/spdk/dif.h 00:08:05.847 TEST_HEADER include/spdk/dma.h 00:08:05.847 TEST_HEADER include/spdk/endian.h 00:08:05.847 TEST_HEADER include/spdk/env_dpdk.h 00:08:05.847 TEST_HEADER include/spdk/env.h 00:08:05.847 TEST_HEADER include/spdk/event.h 00:08:05.847 TEST_HEADER include/spdk/fd_group.h 00:08:05.847 TEST_HEADER include/spdk/fd.h 00:08:05.847 TEST_HEADER include/spdk/file.h 00:08:05.847 TEST_HEADER include/spdk/fsdev.h 00:08:05.847 TEST_HEADER include/spdk/fsdev_module.h 00:08:05.847 TEST_HEADER include/spdk/ftl.h 00:08:05.847 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:05.847 TEST_HEADER include/spdk/gpt_spec.h 00:08:05.847 TEST_HEADER include/spdk/hexlify.h 00:08:05.847 TEST_HEADER include/spdk/histogram_data.h 00:08:05.847 TEST_HEADER include/spdk/idxd.h 00:08:05.847 TEST_HEADER include/spdk/idxd_spec.h 00:08:05.847 
TEST_HEADER include/spdk/init.h 00:08:05.847 TEST_HEADER include/spdk/ioat.h 00:08:05.847 TEST_HEADER include/spdk/ioat_spec.h 00:08:05.847 TEST_HEADER include/spdk/iscsi_spec.h 00:08:05.847 CC examples/util/zipf/zipf.o 00:08:05.847 TEST_HEADER include/spdk/json.h 00:08:05.847 TEST_HEADER include/spdk/jsonrpc.h 00:08:05.847 CC test/thread/poller_perf/poller_perf.o 00:08:05.847 TEST_HEADER include/spdk/keyring.h 00:08:05.847 TEST_HEADER include/spdk/keyring_module.h 00:08:05.847 TEST_HEADER include/spdk/likely.h 00:08:05.847 TEST_HEADER include/spdk/log.h 00:08:05.847 TEST_HEADER include/spdk/lvol.h 00:08:05.847 TEST_HEADER include/spdk/md5.h 00:08:05.847 CC test/app/bdev_svc/bdev_svc.o 00:08:05.847 TEST_HEADER include/spdk/memory.h 00:08:05.847 TEST_HEADER include/spdk/mmio.h 00:08:05.847 CC test/dma/test_dma/test_dma.o 00:08:05.847 TEST_HEADER include/spdk/nbd.h 00:08:05.847 TEST_HEADER include/spdk/net.h 00:08:05.847 TEST_HEADER include/spdk/notify.h 00:08:05.847 TEST_HEADER include/spdk/nvme.h 00:08:05.847 TEST_HEADER include/spdk/nvme_intel.h 00:08:05.847 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:05.847 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:05.847 TEST_HEADER include/spdk/nvme_spec.h 00:08:05.847 TEST_HEADER include/spdk/nvme_zns.h 00:08:05.847 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:05.847 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:05.847 TEST_HEADER include/spdk/nvmf.h 00:08:05.847 TEST_HEADER include/spdk/nvmf_spec.h 00:08:05.847 TEST_HEADER include/spdk/nvmf_transport.h 00:08:05.847 TEST_HEADER include/spdk/opal.h 00:08:05.847 TEST_HEADER include/spdk/opal_spec.h 00:08:05.847 TEST_HEADER include/spdk/pci_ids.h 00:08:05.847 TEST_HEADER include/spdk/pipe.h 00:08:05.847 TEST_HEADER include/spdk/queue.h 00:08:05.847 TEST_HEADER include/spdk/reduce.h 00:08:05.847 TEST_HEADER include/spdk/rpc.h 00:08:05.847 TEST_HEADER include/spdk/scheduler.h 00:08:05.847 TEST_HEADER include/spdk/scsi.h 00:08:05.847 TEST_HEADER include/spdk/scsi_spec.h 00:08:05.847 TEST_HEADER include/spdk/sock.h 00:08:05.847 TEST_HEADER include/spdk/stdinc.h 00:08:05.847 LINK rpc_client_test 00:08:05.847 TEST_HEADER include/spdk/string.h 00:08:05.847 CC test/env/mem_callbacks/mem_callbacks.o 00:08:05.847 TEST_HEADER include/spdk/thread.h 00:08:05.847 TEST_HEADER include/spdk/trace.h 00:08:05.847 TEST_HEADER include/spdk/trace_parser.h 00:08:05.847 TEST_HEADER include/spdk/tree.h 00:08:05.847 TEST_HEADER include/spdk/ublk.h 00:08:05.847 TEST_HEADER include/spdk/util.h 00:08:05.847 TEST_HEADER include/spdk/uuid.h 00:08:05.847 TEST_HEADER include/spdk/version.h 00:08:05.847 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:05.847 LINK nvmf_tgt 00:08:05.847 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:05.847 TEST_HEADER include/spdk/vhost.h 00:08:05.847 TEST_HEADER include/spdk/vmd.h 00:08:05.847 LINK poller_perf 00:08:05.847 LINK spdk_trace_record 00:08:05.847 TEST_HEADER include/spdk/xor.h 00:08:05.847 TEST_HEADER include/spdk/zipf.h 00:08:05.847 CXX test/cpp_headers/accel.o 00:08:05.847 LINK zipf 00:08:06.106 LINK bdev_svc 00:08:06.106 LINK spdk_trace 00:08:06.106 CXX test/cpp_headers/accel_module.o 00:08:06.106 CC test/env/vtophys/vtophys.o 00:08:06.106 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:06.364 CXX test/cpp_headers/assert.o 00:08:06.364 LINK test_dma 00:08:06.364 CC examples/ioat/perf/perf.o 00:08:06.364 CC examples/vmd/lsvmd/lsvmd.o 00:08:06.364 LINK vtophys 00:08:06.364 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:06.364 LINK env_dpdk_post_init 00:08:06.364 CC 
app/iscsi_tgt/iscsi_tgt.o 00:08:06.364 LINK mem_callbacks 00:08:06.364 CXX test/cpp_headers/barrier.o 00:08:06.364 LINK lsvmd 00:08:06.364 CC test/event/event_perf/event_perf.o 00:08:06.364 LINK ioat_perf 00:08:06.623 CC test/event/reactor/reactor.o 00:08:06.623 LINK iscsi_tgt 00:08:06.623 LINK event_perf 00:08:06.623 CXX test/cpp_headers/base64.o 00:08:06.623 CC test/env/memory/memory_ut.o 00:08:06.623 LINK nvme_fuzz 00:08:06.623 CC examples/idxd/perf/perf.o 00:08:06.623 LINK reactor 00:08:06.623 CC test/accel/dif/dif.o 00:08:06.623 CC examples/vmd/led/led.o 00:08:06.623 CC examples/ioat/verify/verify.o 00:08:06.623 CXX test/cpp_headers/bdev.o 00:08:06.881 LINK led 00:08:06.882 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:06.882 CC test/event/reactor_perf/reactor_perf.o 00:08:06.882 CXX test/cpp_headers/bdev_module.o 00:08:06.882 CC app/spdk_tgt/spdk_tgt.o 00:08:06.882 CC test/blobfs/mkfs/mkfs.o 00:08:06.882 LINK verify 00:08:06.882 LINK idxd_perf 00:08:07.140 CXX test/cpp_headers/bdev_zone.o 00:08:07.140 LINK reactor_perf 00:08:07.140 LINK spdk_tgt 00:08:07.140 LINK mkfs 00:08:07.140 CC test/env/pci/pci_ut.o 00:08:07.140 CXX test/cpp_headers/bit_array.o 00:08:07.140 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:07.140 LINK dif 00:08:07.399 CC test/event/app_repeat/app_repeat.o 00:08:07.399 CXX test/cpp_headers/bit_pool.o 00:08:07.399 CC test/lvol/esnap/esnap.o 00:08:07.399 CC app/spdk_lspci/spdk_lspci.o 00:08:07.399 LINK app_repeat 00:08:07.399 LINK interrupt_tgt 00:08:07.399 CXX test/cpp_headers/blob_bdev.o 00:08:07.657 LINK spdk_lspci 00:08:07.657 LINK pci_ut 00:08:07.657 CC test/nvme/aer/aer.o 00:08:07.657 CC test/event/scheduler/scheduler.o 00:08:07.657 LINK memory_ut 00:08:07.657 CXX test/cpp_headers/blobfs_bdev.o 00:08:07.657 CXX test/cpp_headers/blobfs.o 00:08:07.657 CC test/bdev/bdevio/bdevio.o 00:08:07.657 CC app/spdk_nvme_perf/perf.o 00:08:07.915 CC examples/thread/thread/thread_ex.o 00:08:07.915 LINK scheduler 00:08:07.915 LINK aer 00:08:07.915 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:07.915 CXX test/cpp_headers/blob.o 00:08:07.915 CC test/nvme/reset/reset.o 00:08:07.915 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:07.915 LINK thread 00:08:08.175 CC test/nvme/sgl/sgl.o 00:08:08.175 CXX test/cpp_headers/conf.o 00:08:08.175 LINK bdevio 00:08:08.175 LINK reset 00:08:08.175 CC examples/sock/hello_world/hello_sock.o 00:08:08.175 CXX test/cpp_headers/config.o 00:08:08.175 CXX test/cpp_headers/cpuset.o 00:08:08.175 CC test/app/histogram_perf/histogram_perf.o 00:08:08.175 LINK iscsi_fuzz 00:08:08.175 LINK sgl 00:08:08.433 LINK vhost_fuzz 00:08:08.433 CC test/nvme/e2edp/nvme_dp.o 00:08:08.433 CC test/app/jsoncat/jsoncat.o 00:08:08.433 CXX test/cpp_headers/crc16.o 00:08:08.433 LINK hello_sock 00:08:08.433 LINK histogram_perf 00:08:08.433 LINK spdk_nvme_perf 00:08:08.433 CC test/nvme/overhead/overhead.o 00:08:08.433 LINK jsoncat 00:08:08.433 CC test/nvme/err_injection/err_injection.o 00:08:08.691 CC test/nvme/startup/startup.o 00:08:08.691 CXX test/cpp_headers/crc32.o 00:08:08.691 CC test/nvme/reserve/reserve.o 00:08:08.691 LINK nvme_dp 00:08:08.691 CC examples/accel/perf/accel_perf.o 00:08:08.691 LINK err_injection 00:08:08.691 CXX test/cpp_headers/crc64.o 00:08:08.691 LINK startup 00:08:08.691 CC test/app/stub/stub.o 00:08:08.691 CC app/spdk_nvme_identify/identify.o 00:08:08.691 LINK overhead 00:08:08.949 LINK reserve 00:08:08.949 CC test/nvme/simple_copy/simple_copy.o 00:08:08.949 CXX test/cpp_headers/dif.o 00:08:08.949 LINK stub 00:08:08.949 CC 
test/nvme/connect_stress/connect_stress.o 00:08:08.949 CC test/nvme/boot_partition/boot_partition.o 00:08:08.949 CC test/nvme/compliance/nvme_compliance.o 00:08:08.949 CXX test/cpp_headers/dma.o 00:08:09.208 CC test/nvme/fused_ordering/fused_ordering.o 00:08:09.208 CXX test/cpp_headers/endian.o 00:08:09.208 LINK simple_copy 00:08:09.208 LINK accel_perf 00:08:09.208 LINK connect_stress 00:08:09.208 LINK boot_partition 00:08:09.208 CXX test/cpp_headers/env_dpdk.o 00:08:09.208 LINK fused_ordering 00:08:09.465 LINK nvme_compliance 00:08:09.465 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:09.465 CC test/nvme/fdp/fdp.o 00:08:09.465 CC test/nvme/cuse/cuse.o 00:08:09.465 CXX test/cpp_headers/env.o 00:08:09.465 CXX test/cpp_headers/event.o 00:08:09.465 CC app/spdk_nvme_discover/discovery_aer.o 00:08:09.465 LINK spdk_nvme_identify 00:08:09.465 CXX test/cpp_headers/fd_group.o 00:08:09.465 CC examples/blob/hello_world/hello_blob.o 00:08:09.465 LINK doorbell_aers 00:08:09.724 LINK spdk_nvme_discover 00:08:09.724 LINK fdp 00:08:09.724 CXX test/cpp_headers/fd.o 00:08:09.724 CC examples/blob/cli/blobcli.o 00:08:09.724 CC app/spdk_top/spdk_top.o 00:08:09.724 LINK hello_blob 00:08:09.724 CC examples/nvme/hello_world/hello_world.o 00:08:09.724 CC examples/nvme/reconnect/reconnect.o 00:08:09.724 CXX test/cpp_headers/file.o 00:08:09.982 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:09.982 CC examples/nvme/arbitration/arbitration.o 00:08:09.982 LINK hello_world 00:08:09.983 CC examples/nvme/hotplug/hotplug.o 00:08:09.983 CXX test/cpp_headers/fsdev.o 00:08:09.983 LINK reconnect 00:08:10.241 LINK blobcli 00:08:10.241 CXX test/cpp_headers/fsdev_module.o 00:08:10.241 LINK arbitration 00:08:10.241 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:10.241 LINK hotplug 00:08:10.241 CXX test/cpp_headers/ftl.o 00:08:10.241 LINK nvme_manage 00:08:10.501 LINK cmb_copy 00:08:10.501 CXX test/cpp_headers/fuse_dispatcher.o 00:08:10.501 CC examples/nvme/abort/abort.o 00:08:10.501 CC app/vhost/vhost.o 00:08:10.501 CC app/spdk_dd/spdk_dd.o 00:08:10.501 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:10.501 LINK spdk_top 00:08:10.501 LINK cuse 00:08:10.501 CXX test/cpp_headers/gpt_spec.o 00:08:10.760 LINK vhost 00:08:10.760 LINK pmr_persistence 00:08:10.760 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:10.760 CXX test/cpp_headers/hexlify.o 00:08:10.760 CC examples/bdev/hello_world/hello_bdev.o 00:08:10.760 CXX test/cpp_headers/histogram_data.o 00:08:10.760 LINK abort 00:08:10.760 CC examples/bdev/bdevperf/bdevperf.o 00:08:10.760 CXX test/cpp_headers/idxd.o 00:08:10.760 LINK spdk_dd 00:08:10.760 LINK hello_fsdev 00:08:11.021 CC app/fio/nvme/fio_plugin.o 00:08:11.021 CXX test/cpp_headers/idxd_spec.o 00:08:11.021 CXX test/cpp_headers/init.o 00:08:11.021 LINK hello_bdev 00:08:11.021 CXX test/cpp_headers/ioat.o 00:08:11.021 CC app/fio/bdev/fio_plugin.o 00:08:11.021 CXX test/cpp_headers/ioat_spec.o 00:08:11.021 CXX test/cpp_headers/iscsi_spec.o 00:08:11.021 CXX test/cpp_headers/json.o 00:08:11.021 CXX test/cpp_headers/jsonrpc.o 00:08:11.021 CXX test/cpp_headers/keyring.o 00:08:11.283 CXX test/cpp_headers/keyring_module.o 00:08:11.283 CXX test/cpp_headers/likely.o 00:08:11.283 CXX test/cpp_headers/log.o 00:08:11.283 CXX test/cpp_headers/lvol.o 00:08:11.284 CXX test/cpp_headers/md5.o 00:08:11.284 CXX test/cpp_headers/memory.o 00:08:11.284 CXX test/cpp_headers/mmio.o 00:08:11.284 CXX test/cpp_headers/nbd.o 00:08:11.284 CXX test/cpp_headers/net.o 00:08:11.284 CXX test/cpp_headers/notify.o 00:08:11.284 LINK spdk_nvme 
00:08:11.284 CXX test/cpp_headers/nvme.o 00:08:11.543 CXX test/cpp_headers/nvme_intel.o 00:08:11.543 LINK spdk_bdev 00:08:11.543 CXX test/cpp_headers/nvme_ocssd.o 00:08:11.543 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:11.543 CXX test/cpp_headers/nvme_spec.o 00:08:11.543 LINK bdevperf 00:08:11.543 CXX test/cpp_headers/nvme_zns.o 00:08:11.543 CXX test/cpp_headers/nvmf_cmd.o 00:08:11.543 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:11.543 CXX test/cpp_headers/nvmf.o 00:08:11.543 CXX test/cpp_headers/nvmf_spec.o 00:08:11.543 CXX test/cpp_headers/nvmf_transport.o 00:08:11.543 CXX test/cpp_headers/opal.o 00:08:11.543 CXX test/cpp_headers/opal_spec.o 00:08:11.803 CXX test/cpp_headers/pci_ids.o 00:08:11.803 CXX test/cpp_headers/pipe.o 00:08:11.803 CXX test/cpp_headers/queue.o 00:08:11.803 CXX test/cpp_headers/reduce.o 00:08:11.803 CXX test/cpp_headers/rpc.o 00:08:11.803 CXX test/cpp_headers/scheduler.o 00:08:11.803 CXX test/cpp_headers/scsi.o 00:08:11.803 CXX test/cpp_headers/scsi_spec.o 00:08:11.803 CXX test/cpp_headers/sock.o 00:08:11.803 LINK esnap 00:08:11.803 CXX test/cpp_headers/stdinc.o 00:08:11.803 CC examples/nvmf/nvmf/nvmf.o 00:08:11.803 CXX test/cpp_headers/string.o 00:08:11.803 CXX test/cpp_headers/thread.o 00:08:11.803 CXX test/cpp_headers/trace.o 00:08:11.803 CXX test/cpp_headers/trace_parser.o 00:08:12.063 CXX test/cpp_headers/tree.o 00:08:12.063 CXX test/cpp_headers/ublk.o 00:08:12.063 CXX test/cpp_headers/util.o 00:08:12.063 CXX test/cpp_headers/uuid.o 00:08:12.063 CXX test/cpp_headers/version.o 00:08:12.063 CXX test/cpp_headers/vfio_user_pci.o 00:08:12.063 CXX test/cpp_headers/vfio_user_spec.o 00:08:12.063 CXX test/cpp_headers/vhost.o 00:08:12.063 CXX test/cpp_headers/vmd.o 00:08:12.063 CXX test/cpp_headers/xor.o 00:08:12.063 CXX test/cpp_headers/zipf.o 00:08:12.063 LINK nvmf 00:08:12.323 00:08:12.323 real 1m22.251s 00:08:12.323 user 7m5.175s 00:08:12.323 sys 1m54.352s 00:08:12.323 09:18:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:12.323 09:18:49 make -- common/autotest_common.sh@10 -- $ set +x 00:08:12.323 ************************************ 00:08:12.323 END TEST make 00:08:12.323 ************************************ 00:08:12.323 09:18:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:12.323 09:18:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:12.323 09:18:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:12.323 09:18:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.323 09:18:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:12.323 09:18:49 -- pm/common@44 -- $ pid=5259 00:08:12.323 09:18:49 -- pm/common@50 -- $ kill -TERM 5259 00:08:12.323 09:18:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.323 09:18:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:12.323 09:18:49 -- pm/common@44 -- $ pid=5261 00:08:12.323 09:18:49 -- pm/common@50 -- $ kill -TERM 5261 00:08:12.323 09:18:49 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:12.323 09:18:49 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:12.583 09:18:50 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:12.583 09:18:50 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:12.583 09:18:50 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:12.583 09:18:50 -- 
common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:12.583 09:18:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.583 09:18:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.583 09:18:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.583 09:18:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.583 09:18:50 -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.583 09:18:50 -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.583 09:18:50 -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.583 09:18:50 -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.583 09:18:50 -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.583 09:18:50 -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.583 09:18:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.583 09:18:50 -- scripts/common.sh@344 -- # case "$op" in 00:08:12.583 09:18:50 -- scripts/common.sh@345 -- # : 1 00:08:12.583 09:18:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.583 09:18:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.583 09:18:50 -- scripts/common.sh@365 -- # decimal 1 00:08:12.583 09:18:50 -- scripts/common.sh@353 -- # local d=1 00:08:12.583 09:18:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.583 09:18:50 -- scripts/common.sh@355 -- # echo 1 00:08:12.583 09:18:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.583 09:18:50 -- scripts/common.sh@366 -- # decimal 2 00:08:12.583 09:18:50 -- scripts/common.sh@353 -- # local d=2 00:08:12.583 09:18:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.583 09:18:50 -- scripts/common.sh@355 -- # echo 2 00:08:12.583 09:18:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.583 09:18:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.583 09:18:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.583 09:18:50 -- scripts/common.sh@368 -- # return 0 00:08:12.583 09:18:50 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.583 09:18:50 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.583 --rc genhtml_branch_coverage=1 00:08:12.583 --rc genhtml_function_coverage=1 00:08:12.583 --rc genhtml_legend=1 00:08:12.583 --rc geninfo_all_blocks=1 00:08:12.583 --rc geninfo_unexecuted_blocks=1 00:08:12.583 00:08:12.583 ' 00:08:12.583 09:18:50 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.583 --rc genhtml_branch_coverage=1 00:08:12.583 --rc genhtml_function_coverage=1 00:08:12.583 --rc genhtml_legend=1 00:08:12.583 --rc geninfo_all_blocks=1 00:08:12.583 --rc geninfo_unexecuted_blocks=1 00:08:12.583 00:08:12.583 ' 00:08:12.583 09:18:50 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.583 --rc genhtml_branch_coverage=1 00:08:12.583 --rc genhtml_function_coverage=1 00:08:12.583 --rc genhtml_legend=1 00:08:12.583 --rc geninfo_all_blocks=1 00:08:12.583 --rc geninfo_unexecuted_blocks=1 00:08:12.583 00:08:12.583 ' 00:08:12.583 09:18:50 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:12.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.583 --rc genhtml_branch_coverage=1 00:08:12.583 --rc genhtml_function_coverage=1 00:08:12.583 --rc genhtml_legend=1 00:08:12.583 --rc geninfo_all_blocks=1 00:08:12.583 --rc geninfo_unexecuted_blocks=1 
00:08:12.583 00:08:12.583 ' 00:08:12.583 09:18:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.583 09:18:50 -- nvmf/common.sh@7 -- # uname -s 00:08:12.583 09:18:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.583 09:18:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.583 09:18:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.583 09:18:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.583 09:18:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.583 09:18:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.583 09:18:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.583 09:18:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.583 09:18:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.583 09:18:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.583 09:18:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:08:12.583 09:18:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:08:12.583 09:18:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.583 09:18:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.583 09:18:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.583 09:18:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.583 09:18:50 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.583 09:18:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.583 09:18:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.583 09:18:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.583 09:18:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.583 09:18:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.583 09:18:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.583 09:18:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.583 09:18:50 -- paths/export.sh@5 -- # export PATH 00:08:12.584 09:18:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.584 09:18:50 -- nvmf/common.sh@51 -- # : 0 00:08:12.584 09:18:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.584 09:18:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.584 09:18:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.584 09:18:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.584 09:18:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.584 09:18:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.584 09:18:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.584 09:18:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.584 09:18:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.584 09:18:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:12.584 09:18:50 -- spdk/autotest.sh@32 -- # uname -s 00:08:12.584 09:18:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:12.584 09:18:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:12.584 09:18:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:12.584 09:18:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:12.584 09:18:50 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:12.584 09:18:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:12.584 09:18:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:12.584 09:18:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:12.584 09:18:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:12.584 09:18:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54314 00:08:12.584 09:18:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:12.584 09:18:50 -- pm/common@17 -- # local monitor 00:08:12.584 09:18:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.584 09:18:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:12.584 09:18:50 -- pm/common@25 -- # sleep 1 00:08:12.584 09:18:50 -- pm/common@21 -- # date +%s 00:08:12.584 09:18:50 -- pm/common@21 -- # date +%s 00:08:12.584 09:18:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733735930 00:08:12.844 09:18:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733735930 00:08:12.844 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733735930_collect-vmstat.pm.log 00:08:12.844 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733735930_collect-cpu-load.pm.log 00:08:13.785 09:18:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:13.785 09:18:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:13.785 09:18:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.785 09:18:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.785 09:18:51 -- spdk/autotest.sh@59 -- # create_test_list 00:08:13.785 09:18:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:13.785 09:18:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.785 09:18:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:13.785 09:18:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:13.785 09:18:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:13.785 09:18:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:13.785 09:18:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:08:13.785 09:18:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:13.785 09:18:51 -- common/autotest_common.sh@1457 -- # uname 00:08:13.785 09:18:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:13.785 09:18:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:13.785 09:18:51 -- common/autotest_common.sh@1477 -- # uname 00:08:13.785 09:18:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:13.785 09:18:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:13.785 09:18:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:13.785 lcov: LCOV version 1.15 00:08:13.785 09:18:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:28.668 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:28.668 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:46.757 09:19:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:46.757 09:19:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.757 09:19:22 -- common/autotest_common.sh@10 -- # set +x 00:08:46.757 09:19:22 -- spdk/autotest.sh@78 -- # rm -f 00:08:46.757 09:19:22 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:46.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:46.757 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:46.757 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:46.757 09:19:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:46.757 09:19:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:46.757 09:19:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:46.757 09:19:23 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:46.757 09:19:23 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:46.757 09:19:23 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:46.757 09:19:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:46.757 09:19:23 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:46.757 09:19:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:46.757 09:19:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:46.757 09:19:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:46.757 09:19:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:46.757 09:19:23 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:46.757 09:19:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:46.757 09:19:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:46.757 09:19:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:46.757 
09:19:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:46.757 09:19:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:08:46.757 09:19:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:46.757 09:19:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:46.757 09:19:23 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:08:46.757 09:19:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:46.757 09:19:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:46.757 09:19:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:46.757 09:19:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:46.757 09:19:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:46.757 09:19:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:46.757 09:19:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:46.757 09:19:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:46.757 09:19:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:46.757 No valid GPT data, bailing 00:08:46.757 09:19:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:46.757 09:19:23 -- scripts/common.sh@394 -- # pt= 00:08:46.757 09:19:23 -- scripts/common.sh@395 -- # return 1 00:08:46.757 09:19:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:46.757 1+0 records in 00:08:46.757 1+0 records out 00:08:46.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102106 s, 103 MB/s 00:08:46.757 09:19:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:46.757 09:19:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:46.757 09:19:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:46.757 09:19:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:46.757 09:19:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:46.757 No valid GPT data, bailing 00:08:46.757 09:19:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:46.757 09:19:23 -- scripts/common.sh@394 -- # pt= 00:08:46.757 09:19:23 -- scripts/common.sh@395 -- # return 1 00:08:46.757 09:19:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:46.757 1+0 records in 00:08:46.757 1+0 records out 00:08:46.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586086 s, 179 MB/s 00:08:46.757 09:19:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:46.757 09:19:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:46.757 09:19:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:46.757 09:19:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:46.757 09:19:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:46.757 No valid GPT data, bailing 00:08:46.757 09:19:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:46.757 09:19:23 -- scripts/common.sh@394 -- # pt= 00:08:46.757 09:19:23 -- scripts/common.sh@395 -- # return 1 00:08:46.757 
09:19:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:46.757 1+0 records in 00:08:46.757 1+0 records out 00:08:46.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429599 s, 244 MB/s 00:08:46.757 09:19:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:46.757 09:19:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:46.757 09:19:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:46.757 09:19:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:46.757 09:19:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:46.758 No valid GPT data, bailing 00:08:46.758 09:19:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:46.758 09:19:23 -- scripts/common.sh@394 -- # pt= 00:08:46.758 09:19:23 -- scripts/common.sh@395 -- # return 1 00:08:46.758 09:19:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:46.758 1+0 records in 00:08:46.758 1+0 records out 00:08:46.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448607 s, 234 MB/s 00:08:46.758 09:19:23 -- spdk/autotest.sh@105 -- # sync 00:08:46.758 09:19:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:46.758 09:19:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:46.758 09:19:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:49.287 09:19:26 -- spdk/autotest.sh@111 -- # uname -s 00:08:49.287 09:19:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:49.287 09:19:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:49.287 09:19:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:49.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.546 Hugepages 00:08:49.546 node hugesize free / total 00:08:49.546 node0 1048576kB 0 / 0 00:08:49.546 node0 2048kB 0 / 0 00:08:49.546 00:08:49.546 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:49.805 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:49.805 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:50.062 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:50.062 09:19:27 -- spdk/autotest.sh@117 -- # uname -s 00:08:50.062 09:19:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:50.062 09:19:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:50.062 09:19:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:50.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.998 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.998 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.998 09:19:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:52.433 09:19:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:52.433 09:19:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:52.433 09:19:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.433 09:19:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:52.433 09:19:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.433 09:19:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.433 09:19:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.433 09:19:29 -- 
common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.433 09:19:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.433 09:19:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:52.433 09:19:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:52.433 09:19:29 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:52.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.693 Waiting for block devices as requested 00:08:52.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.952 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:52.953 09:19:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:52.953 09:19:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:52.953 09:19:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:52.953 09:19:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:52.953 09:19:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:52.953 09:19:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:52.953 09:19:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:52.953 09:19:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:52.953 09:19:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:52.953 09:19:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:52.953 09:19:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:52.953 09:19:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:52.953 09:19:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:53.212 09:19:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:53.212 09:19:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:53.212 09:19:30 -- common/autotest_common.sh@1543 -- # continue 00:08:53.212 09:19:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:53.212 09:19:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:53.212 09:19:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:53.212 09:19:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:53.212 09:19:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:53.212 09:19:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:53.212 09:19:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:53.212 09:19:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:53.212 09:19:30 -- 
common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:53.212 09:19:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:53.212 09:19:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:53.212 09:19:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:53.212 09:19:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:53.212 09:19:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:53.212 09:19:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:53.212 09:19:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:53.212 09:19:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:53.212 09:19:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:53.212 09:19:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:53.212 09:19:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:53.212 09:19:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:53.213 09:19:30 -- common/autotest_common.sh@1543 -- # continue 00:08:53.213 09:19:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:53.213 09:19:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.213 09:19:30 -- common/autotest_common.sh@10 -- # set +x 00:08:53.213 09:19:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:53.213 09:19:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.213 09:19:30 -- common/autotest_common.sh@10 -- # set +x 00:08:53.213 09:19:30 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.151 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.151 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:54.151 09:19:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:54.151 09:19:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.151 09:19:31 -- common/autotest_common.sh@10 -- # set +x 00:08:54.439 09:19:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:54.439 09:19:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:54.439 09:19:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:54.439 09:19:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:54.439 09:19:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:54.439 09:19:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:54.439 09:19:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:54.439 09:19:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:54.439 09:19:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:54.439 09:19:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:54.439 09:19:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:54.439 09:19:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.439 09:19:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:54.439 09:19:31 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:54.439 09:19:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:54.439 09:19:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:54.439 09:19:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:54.439 09:19:31 -- 
common/autotest_common.sh@1566 -- # device=0x0010 00:08:54.439 09:19:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:54.439 09:19:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:54.439 09:19:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:54.439 09:19:31 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:54.439 09:19:31 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:54.439 09:19:31 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:54.439 09:19:31 -- common/autotest_common.sh@1572 -- # return 0 00:08:54.439 09:19:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:54.439 09:19:32 -- common/autotest_common.sh@1580 -- # return 0 00:08:54.439 09:19:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:54.439 09:19:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:54.439 09:19:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:54.439 09:19:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:54.439 09:19:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:54.439 09:19:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.439 09:19:32 -- common/autotest_common.sh@10 -- # set +x 00:08:54.439 09:19:32 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:08:54.439 09:19:32 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:08:54.439 09:19:32 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:08:54.439 09:19:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:54.439 09:19:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.439 09:19:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.439 09:19:32 -- common/autotest_common.sh@10 -- # set +x 00:08:54.439 ************************************ 00:08:54.439 START TEST env 00:08:54.439 ************************************ 00:08:54.439 09:19:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:54.439 * Looking for test storage... 00:08:54.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:54.439 09:19:32 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:54.439 09:19:32 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:54.439 09:19:32 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:54.698 09:19:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.698 09:19:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.698 09:19:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.698 09:19:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.698 09:19:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.698 09:19:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.698 09:19:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.698 09:19:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.698 09:19:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.698 09:19:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.698 09:19:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.698 09:19:32 env -- scripts/common.sh@344 -- # case "$op" in 00:08:54.698 09:19:32 env -- scripts/common.sh@345 -- # : 1 00:08:54.698 09:19:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.698 09:19:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.698 09:19:32 env -- scripts/common.sh@365 -- # decimal 1 00:08:54.698 09:19:32 env -- scripts/common.sh@353 -- # local d=1 00:08:54.698 09:19:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.698 09:19:32 env -- scripts/common.sh@355 -- # echo 1 00:08:54.698 09:19:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.698 09:19:32 env -- scripts/common.sh@366 -- # decimal 2 00:08:54.698 09:19:32 env -- scripts/common.sh@353 -- # local d=2 00:08:54.698 09:19:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.698 09:19:32 env -- scripts/common.sh@355 -- # echo 2 00:08:54.698 09:19:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.698 09:19:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.698 09:19:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.698 09:19:32 env -- scripts/common.sh@368 -- # return 0 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.698 --rc genhtml_branch_coverage=1 00:08:54.698 --rc genhtml_function_coverage=1 00:08:54.698 --rc genhtml_legend=1 00:08:54.698 --rc geninfo_all_blocks=1 00:08:54.698 --rc geninfo_unexecuted_blocks=1 00:08:54.698 00:08:54.698 ' 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.698 --rc genhtml_branch_coverage=1 00:08:54.698 --rc genhtml_function_coverage=1 00:08:54.698 --rc genhtml_legend=1 00:08:54.698 --rc geninfo_all_blocks=1 00:08:54.698 --rc geninfo_unexecuted_blocks=1 00:08:54.698 00:08:54.698 ' 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.698 --rc genhtml_branch_coverage=1 00:08:54.698 --rc genhtml_function_coverage=1 00:08:54.698 --rc genhtml_legend=1 00:08:54.698 --rc geninfo_all_blocks=1 00:08:54.698 --rc geninfo_unexecuted_blocks=1 00:08:54.698 00:08:54.698 ' 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.698 --rc genhtml_branch_coverage=1 00:08:54.698 --rc genhtml_function_coverage=1 00:08:54.698 --rc genhtml_legend=1 00:08:54.698 --rc geninfo_all_blocks=1 00:08:54.698 --rc geninfo_unexecuted_blocks=1 00:08:54.698 00:08:54.698 ' 00:08:54.698 09:19:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.698 09:19:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.698 09:19:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:54.698 ************************************ 00:08:54.698 START TEST env_memory 00:08:54.698 ************************************ 00:08:54.698 09:19:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:54.698 00:08:54.698 00:08:54.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.698 http://cunit.sourceforge.net/ 00:08:54.698 00:08:54.698 00:08:54.698 Suite: memory 00:08:54.698 Test: alloc and free memory map ...[2024-12-09 09:19:32.289078] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:54.698 passed 00:08:54.698 Test: mem map translation ...[2024-12-09 09:19:32.309721] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:54.698 [2024-12-09 09:19:32.309926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:54.699 [2024-12-09 09:19:32.310112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:54.699 [2024-12-09 09:19:32.310285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:54.699 passed 00:08:54.699 Test: mem map registration ...[2024-12-09 09:19:32.348804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:54.699 [2024-12-09 09:19:32.349024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:54.699 passed 00:08:54.699 Test: mem map adjacent registrations ...passed 00:08:54.699 00:08:54.699 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.699 suites 1 1 n/a 0 0 00:08:54.699 tests 4 4 4 0 0 00:08:54.699 asserts 152 152 152 0 n/a 00:08:54.699 00:08:54.699 Elapsed time = 0.138 seconds 00:08:54.699 00:08:54.699 real 0m0.162s 00:08:54.699 user 0m0.142s 00:08:54.699 sys 0m0.013s 00:08:54.699 09:19:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.699 09:19:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:54.699 ************************************ 00:08:54.699 END TEST env_memory 00:08:54.699 ************************************ 00:08:54.958 09:19:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:54.958 09:19:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.958 09:19:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.958 09:19:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:54.958 ************************************ 00:08:54.958 START TEST env_vtophys 00:08:54.958 ************************************ 00:08:54.958 09:19:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:54.958 EAL: lib.eal log level changed from notice to debug 00:08:54.958 EAL: Detected lcore 0 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 1 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 2 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 3 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 4 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 5 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 6 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 7 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 8 as core 0 on socket 0 00:08:54.958 EAL: Detected lcore 9 as core 0 on socket 0 00:08:54.958 EAL: Maximum logical cores by configuration: 128 00:08:54.958 EAL: Detected CPU lcores: 10 00:08:54.958 EAL: Detected NUMA nodes: 1 00:08:54.958 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:54.958 EAL: Detected shared linkage of DPDK 00:08:54.958 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:54.958 EAL: Selected IOVA mode 'PA' 00:08:54.958 EAL: Probing VFIO support... 00:08:54.959 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:54.959 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:54.959 EAL: Ask a virtual area of 0x2e000 bytes 00:08:54.959 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:54.959 EAL: Setting up physically contiguous memory... 00:08:54.959 EAL: Setting maximum number of open files to 524288 00:08:54.959 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:54.959 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:54.959 EAL: Ask a virtual area of 0x61000 bytes 00:08:54.959 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:54.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:54.959 EAL: Ask a virtual area of 0x400000000 bytes 00:08:54.959 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:54.959 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:54.959 EAL: Ask a virtual area of 0x61000 bytes 00:08:54.959 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:54.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:54.959 EAL: Ask a virtual area of 0x400000000 bytes 00:08:54.959 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:54.959 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:54.959 EAL: Ask a virtual area of 0x61000 bytes 00:08:54.959 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:54.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:54.959 EAL: Ask a virtual area of 0x400000000 bytes 00:08:54.959 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:54.959 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:54.959 EAL: Ask a virtual area of 0x61000 bytes 00:08:54.959 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:54.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:54.959 EAL: Ask a virtual area of 0x400000000 bytes 00:08:54.959 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:54.959 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:54.959 EAL: Hugepages will be freed exactly as allocated. 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: TSC frequency is ~2490000 KHz 00:08:54.959 EAL: Main lcore 0 is ready (tid=7f6655a87a00;cpuset=[0]) 00:08:54.959 EAL: Trying to obtain current memory policy. 00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 0 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 2MB 00:08:54.959 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:54.959 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:54.959 EAL: Mem event callback 'spdk:(nil)' registered 00:08:54.959 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:54.959 00:08:54.959 00:08:54.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.959 http://cunit.sourceforge.net/ 00:08:54.959 00:08:54.959 00:08:54.959 Suite: components_suite 00:08:54.959 Test: vtophys_malloc_test ...passed 00:08:54.959 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 4 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 4MB 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was shrunk by 4MB 00:08:54.959 EAL: Trying to obtain current memory policy. 00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 4 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 6MB 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was shrunk by 6MB 00:08:54.959 EAL: Trying to obtain current memory policy. 00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 4 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 10MB 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was shrunk by 10MB 00:08:54.959 EAL: Trying to obtain current memory policy. 00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 4 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 18MB 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was shrunk by 18MB 00:08:54.959 EAL: Trying to obtain current memory policy. 00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 4 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 34MB 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was shrunk by 34MB 00:08:54.959 EAL: Trying to obtain current memory policy. 
00:08:54.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.959 EAL: Restoring previous memory policy: 4 00:08:54.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.959 EAL: request: mp_malloc_sync 00:08:54.959 EAL: No shared files mode enabled, IPC is disabled 00:08:54.959 EAL: Heap on socket 0 was expanded by 66MB 00:08:55.218 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.218 EAL: request: mp_malloc_sync 00:08:55.218 EAL: No shared files mode enabled, IPC is disabled 00:08:55.218 EAL: Heap on socket 0 was shrunk by 66MB 00:08:55.218 EAL: Trying to obtain current memory policy. 00:08:55.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:55.218 EAL: Restoring previous memory policy: 4 00:08:55.218 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.218 EAL: request: mp_malloc_sync 00:08:55.218 EAL: No shared files mode enabled, IPC is disabled 00:08:55.218 EAL: Heap on socket 0 was expanded by 130MB 00:08:55.218 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.218 EAL: request: mp_malloc_sync 00:08:55.218 EAL: No shared files mode enabled, IPC is disabled 00:08:55.218 EAL: Heap on socket 0 was shrunk by 130MB 00:08:55.218 EAL: Trying to obtain current memory policy. 00:08:55.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:55.218 EAL: Restoring previous memory policy: 4 00:08:55.218 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.218 EAL: request: mp_malloc_sync 00:08:55.218 EAL: No shared files mode enabled, IPC is disabled 00:08:55.218 EAL: Heap on socket 0 was expanded by 258MB 00:08:55.218 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.218 EAL: request: mp_malloc_sync 00:08:55.218 EAL: No shared files mode enabled, IPC is disabled 00:08:55.218 EAL: Heap on socket 0 was shrunk by 258MB 00:08:55.218 EAL: Trying to obtain current memory policy. 00:08:55.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:55.477 EAL: Restoring previous memory policy: 4 00:08:55.477 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.477 EAL: request: mp_malloc_sync 00:08:55.477 EAL: No shared files mode enabled, IPC is disabled 00:08:55.477 EAL: Heap on socket 0 was expanded by 514MB 00:08:55.477 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.477 EAL: request: mp_malloc_sync 00:08:55.477 EAL: No shared files mode enabled, IPC is disabled 00:08:55.477 EAL: Heap on socket 0 was shrunk by 514MB 00:08:55.477 EAL: Trying to obtain current memory policy. 
00:08:55.477 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:55.736 EAL: Restoring previous memory policy: 4 00:08:55.736 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.736 EAL: request: mp_malloc_sync 00:08:55.736 EAL: No shared files mode enabled, IPC is disabled 00:08:55.736 EAL: Heap on socket 0 was expanded by 1026MB 00:08:55.994 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.994 passed 00:08:55.994 00:08:55.994 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.994 suites 1 1 n/a 0 0 00:08:55.994 tests 2 2 2 0 0 00:08:55.994 asserts 5659 5659 5659 0 n/a 00:08:55.994 00:08:55.994 Elapsed time = 0.995 seconds 00:08:55.994 EAL: request: mp_malloc_sync 00:08:55.994 EAL: No shared files mode enabled, IPC is disabled 00:08:55.994 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:55.994 EAL: Calling mem event callback 'spdk:(nil)' 00:08:55.994 EAL: request: mp_malloc_sync 00:08:55.994 EAL: No shared files mode enabled, IPC is disabled 00:08:55.994 EAL: Heap on socket 0 was shrunk by 2MB 00:08:55.994 EAL: No shared files mode enabled, IPC is disabled 00:08:55.994 EAL: No shared files mode enabled, IPC is disabled 00:08:55.994 EAL: No shared files mode enabled, IPC is disabled 00:08:55.994 00:08:55.994 real 0m1.201s 00:08:55.994 user 0m0.654s 00:08:55.994 sys 0m0.415s 00:08:55.994 09:19:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.994 09:19:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:55.994 ************************************ 00:08:55.994 END TEST env_vtophys 00:08:55.994 ************************************ 00:08:56.254 09:19:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:56.254 09:19:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.254 09:19:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.254 09:19:33 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.254 ************************************ 00:08:56.254 START TEST env_pci 00:08:56.254 ************************************ 00:08:56.254 09:19:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:56.254 00:08:56.254 00:08:56.254 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.254 http://cunit.sourceforge.net/ 00:08:56.254 00:08:56.254 00:08:56.254 Suite: pci 00:08:56.254 Test: pci_hook ...[2024-12-09 09:19:33.760381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56577 has claimed it 00:08:56.254 passed 00:08:56.254 00:08:56.254 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.254 suites 1 1 n/a 0 0 00:08:56.254 tests 1 1 1 0 0 00:08:56.254 asserts 25 25 25 0 n/a 00:08:56.254 00:08:56.254 Elapsed time = 0.003 seconds 00:08:56.254 EAL: Cannot find device (10000:00:01.0) 00:08:56.254 EAL: Failed to attach device on primary process 00:08:56.254 00:08:56.254 real 0m0.030s 00:08:56.254 user 0m0.013s 00:08:56.254 sys 0m0.016s 00:08:56.254 09:19:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.254 ************************************ 00:08:56.254 END TEST env_pci 00:08:56.254 09:19:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:56.254 ************************************ 00:08:56.254 09:19:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:56.254 09:19:33 env -- env/env.sh@15 -- # uname 00:08:56.254 09:19:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:56.254 09:19:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:56.254 09:19:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:56.254 09:19:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:56.254 09:19:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.254 09:19:33 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.254 ************************************ 00:08:56.254 START TEST env_dpdk_post_init 00:08:56.254 ************************************ 00:08:56.254 09:19:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:56.254 EAL: Detected CPU lcores: 10 00:08:56.254 EAL: Detected NUMA nodes: 1 00:08:56.254 EAL: Detected shared linkage of DPDK 00:08:56.254 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:56.254 EAL: Selected IOVA mode 'PA' 00:08:56.513 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:56.513 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:56.513 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:56.513 Starting DPDK initialization... 00:08:56.513 Starting SPDK post initialization... 00:08:56.513 SPDK NVMe probe 00:08:56.513 Attaching to 0000:00:10.0 00:08:56.513 Attaching to 0000:00:11.0 00:08:56.513 Attached to 0000:00:10.0 00:08:56.513 Attached to 0000:00:11.0 00:08:56.513 Cleaning up... 00:08:56.513 00:08:56.513 real 0m0.207s 00:08:56.513 user 0m0.061s 00:08:56.513 sys 0m0.045s 00:08:56.513 ************************************ 00:08:56.513 END TEST env_dpdk_post_init 00:08:56.513 ************************************ 00:08:56.513 09:19:34 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.513 09:19:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:56.513 09:19:34 env -- env/env.sh@26 -- # uname 00:08:56.513 09:19:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:56.513 09:19:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:56.513 09:19:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.513 09:19:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.513 09:19:34 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.513 ************************************ 00:08:56.513 START TEST env_mem_callbacks 00:08:56.513 ************************************ 00:08:56.513 09:19:34 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:56.513 EAL: Detected CPU lcores: 10 00:08:56.513 EAL: Detected NUMA nodes: 1 00:08:56.513 EAL: Detected shared linkage of DPDK 00:08:56.513 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:56.513 EAL: Selected IOVA mode 'PA' 00:08:56.772 00:08:56.772 00:08:56.772 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.772 http://cunit.sourceforge.net/ 00:08:56.772 00:08:56.772 00:08:56.772 Suite: memory 00:08:56.772 Test: test ... 
00:08:56.772 register 0x200000200000 2097152 00:08:56.772 malloc 3145728 00:08:56.772 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:56.772 register 0x200000400000 4194304 00:08:56.772 buf 0x200000500000 len 3145728 PASSED 00:08:56.772 malloc 64 00:08:56.772 buf 0x2000004fff40 len 64 PASSED 00:08:56.772 malloc 4194304 00:08:56.772 register 0x200000800000 6291456 00:08:56.772 buf 0x200000a00000 len 4194304 PASSED 00:08:56.772 free 0x200000500000 3145728 00:08:56.772 free 0x2000004fff40 64 00:08:56.772 unregister 0x200000400000 4194304 PASSED 00:08:56.772 free 0x200000a00000 4194304 00:08:56.772 unregister 0x200000800000 6291456 PASSED 00:08:56.772 malloc 8388608 00:08:56.772 register 0x200000400000 10485760 00:08:56.772 buf 0x200000600000 len 8388608 PASSED 00:08:56.772 free 0x200000600000 8388608 00:08:56.772 unregister 0x200000400000 10485760 PASSED 00:08:56.772 passed 00:08:56.772 00:08:56.772 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.772 suites 1 1 n/a 0 0 00:08:56.772 tests 1 1 1 0 0 00:08:56.772 asserts 15 15 15 0 n/a 00:08:56.772 00:08:56.772 Elapsed time = 0.006 seconds 00:08:56.772 00:08:56.772 real 0m0.153s 00:08:56.772 user 0m0.023s 00:08:56.772 sys 0m0.029s 00:08:56.772 09:19:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.772 09:19:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:56.772 ************************************ 00:08:56.772 END TEST env_mem_callbacks 00:08:56.772 ************************************ 00:08:56.772 00:08:56.772 real 0m2.313s 00:08:56.772 user 0m1.124s 00:08:56.772 sys 0m0.849s 00:08:56.772 09:19:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.772 09:19:34 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.772 ************************************ 00:08:56.772 END TEST env 00:08:56.772 ************************************ 00:08:56.772 09:19:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:56.772 09:19:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.772 09:19:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.772 09:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:56.772 ************************************ 00:08:56.772 START TEST rpc 00:08:56.772 ************************************ 00:08:56.772 09:19:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:57.030 * Looking for test storage... 
00:08:57.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.031 09:19:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.031 09:19:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.031 09:19:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.031 09:19:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.031 09:19:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.031 09:19:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:57.031 09:19:34 rpc -- scripts/common.sh@345 -- # : 1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.031 09:19:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.031 09:19:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@353 -- # local d=1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.031 09:19:34 rpc -- scripts/common.sh@355 -- # echo 1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.031 09:19:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@353 -- # local d=2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.031 09:19:34 rpc -- scripts/common.sh@355 -- # echo 2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.031 09:19:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.031 09:19:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.031 09:19:34 rpc -- scripts/common.sh@368 -- # return 0 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.031 --rc genhtml_branch_coverage=1 00:08:57.031 --rc genhtml_function_coverage=1 00:08:57.031 --rc genhtml_legend=1 00:08:57.031 --rc geninfo_all_blocks=1 00:08:57.031 --rc geninfo_unexecuted_blocks=1 00:08:57.031 00:08:57.031 ' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.031 --rc genhtml_branch_coverage=1 00:08:57.031 --rc genhtml_function_coverage=1 00:08:57.031 --rc genhtml_legend=1 00:08:57.031 --rc geninfo_all_blocks=1 00:08:57.031 --rc geninfo_unexecuted_blocks=1 00:08:57.031 00:08:57.031 ' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.031 --rc genhtml_branch_coverage=1 00:08:57.031 --rc genhtml_function_coverage=1 00:08:57.031 --rc 
genhtml_legend=1 00:08:57.031 --rc geninfo_all_blocks=1 00:08:57.031 --rc geninfo_unexecuted_blocks=1 00:08:57.031 00:08:57.031 ' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.031 --rc genhtml_branch_coverage=1 00:08:57.031 --rc genhtml_function_coverage=1 00:08:57.031 --rc genhtml_legend=1 00:08:57.031 --rc geninfo_all_blocks=1 00:08:57.031 --rc geninfo_unexecuted_blocks=1 00:08:57.031 00:08:57.031 ' 00:08:57.031 09:19:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:57.031 09:19:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56694 00:08:57.031 09:19:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:57.031 09:19:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56694 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 56694 ']' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.031 09:19:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.031 [2024-12-09 09:19:34.696206] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:08:57.031 [2024-12-09 09:19:34.696300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56694 ] 00:08:57.290 [2024-12-09 09:19:34.965576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.563 [2024-12-09 09:19:35.028199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:57.563 [2024-12-09 09:19:35.028263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56694' to capture a snapshot of events at runtime. 00:08:57.563 [2024-12-09 09:19:35.028273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.563 [2024-12-09 09:19:35.028282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.563 [2024-12-09 09:19:35.028288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56694 for offline analysis/debug. 
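The rpc.sh preamble above boils down to starting spdk_tgt with the bdev tracepoint group enabled and polling the default UNIX socket until the target answers; the NOTICE lines also spell out how to snapshot those tracepoints. A rough manual equivalent follows (a sketch, not the test script itself; paths and the pid are the ones from this run, and it assumes spdk_trace was built into build/bin alongside spdk_tgt):

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt -e bdev &        # same '-e bdev' tracepoint mask as rpc.sh@64
spdk_pid=$!
# poll the default RPC socket until the target is ready (what waitforlisten does)
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done
# optional: capture the tracepoint snapshot suggested by the NOTICE above
$SPDK/build/bin/spdk_trace -s spdk_tgt -p "$spdk_pid"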
00:08:57.563 [2024-12-09 09:19:35.028628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.563 [2024-12-09 09:19:35.086931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.155 09:19:35 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.155 09:19:35 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:58.155 09:19:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:58.155 09:19:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:58.155 09:19:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:58.155 09:19:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:58.155 09:19:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.155 09:19:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.155 09:19:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.155 ************************************ 00:08:58.155 START TEST rpc_integrity 00:08:58.155 ************************************ 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:58.155 { 00:08:58.155 "name": "Malloc0", 00:08:58.155 "aliases": [ 00:08:58.155 "51e64417-b174-489d-a65f-1f466c7f0adc" 00:08:58.155 ], 00:08:58.155 "product_name": "Malloc disk", 00:08:58.155 "block_size": 512, 00:08:58.155 "num_blocks": 16384, 00:08:58.155 "uuid": "51e64417-b174-489d-a65f-1f466c7f0adc", 00:08:58.155 "assigned_rate_limits": { 00:08:58.155 "rw_ios_per_sec": 0, 00:08:58.155 "rw_mbytes_per_sec": 0, 00:08:58.155 "r_mbytes_per_sec": 0, 00:08:58.155 "w_mbytes_per_sec": 0 00:08:58.155 }, 00:08:58.155 "claimed": false, 00:08:58.155 "zoned": false, 00:08:58.155 
"supported_io_types": { 00:08:58.155 "read": true, 00:08:58.155 "write": true, 00:08:58.155 "unmap": true, 00:08:58.155 "flush": true, 00:08:58.155 "reset": true, 00:08:58.155 "nvme_admin": false, 00:08:58.155 "nvme_io": false, 00:08:58.155 "nvme_io_md": false, 00:08:58.155 "write_zeroes": true, 00:08:58.155 "zcopy": true, 00:08:58.155 "get_zone_info": false, 00:08:58.155 "zone_management": false, 00:08:58.155 "zone_append": false, 00:08:58.155 "compare": false, 00:08:58.155 "compare_and_write": false, 00:08:58.155 "abort": true, 00:08:58.155 "seek_hole": false, 00:08:58.155 "seek_data": false, 00:08:58.155 "copy": true, 00:08:58.155 "nvme_iov_md": false 00:08:58.155 }, 00:08:58.155 "memory_domains": [ 00:08:58.155 { 00:08:58.155 "dma_device_id": "system", 00:08:58.155 "dma_device_type": 1 00:08:58.155 }, 00:08:58.155 { 00:08:58.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.155 "dma_device_type": 2 00:08:58.155 } 00:08:58.155 ], 00:08:58.155 "driver_specific": {} 00:08:58.155 } 00:08:58.155 ]' 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.155 [2024-12-09 09:19:35.746541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:58.155 [2024-12-09 09:19:35.746605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.155 [2024-12-09 09:19:35.746626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1408cb0 00:08:58.155 [2024-12-09 09:19:35.746636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.155 [2024-12-09 09:19:35.748346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.155 [2024-12-09 09:19:35.748380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:58.155 Passthru0 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.155 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.155 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:58.155 { 00:08:58.155 "name": "Malloc0", 00:08:58.155 "aliases": [ 00:08:58.155 "51e64417-b174-489d-a65f-1f466c7f0adc" 00:08:58.155 ], 00:08:58.155 "product_name": "Malloc disk", 00:08:58.155 "block_size": 512, 00:08:58.155 "num_blocks": 16384, 00:08:58.155 "uuid": "51e64417-b174-489d-a65f-1f466c7f0adc", 00:08:58.155 "assigned_rate_limits": { 00:08:58.155 "rw_ios_per_sec": 0, 00:08:58.155 "rw_mbytes_per_sec": 0, 00:08:58.155 "r_mbytes_per_sec": 0, 00:08:58.155 "w_mbytes_per_sec": 0 00:08:58.155 }, 00:08:58.155 "claimed": true, 00:08:58.155 "claim_type": "exclusive_write", 00:08:58.155 "zoned": false, 00:08:58.155 "supported_io_types": { 00:08:58.155 "read": true, 00:08:58.155 "write": true, 00:08:58.155 "unmap": true, 00:08:58.155 "flush": true, 00:08:58.155 "reset": true, 00:08:58.155 "nvme_admin": false, 
00:08:58.155 "nvme_io": false, 00:08:58.155 "nvme_io_md": false, 00:08:58.155 "write_zeroes": true, 00:08:58.155 "zcopy": true, 00:08:58.155 "get_zone_info": false, 00:08:58.155 "zone_management": false, 00:08:58.155 "zone_append": false, 00:08:58.155 "compare": false, 00:08:58.155 "compare_and_write": false, 00:08:58.155 "abort": true, 00:08:58.155 "seek_hole": false, 00:08:58.155 "seek_data": false, 00:08:58.155 "copy": true, 00:08:58.155 "nvme_iov_md": false 00:08:58.155 }, 00:08:58.155 "memory_domains": [ 00:08:58.155 { 00:08:58.155 "dma_device_id": "system", 00:08:58.155 "dma_device_type": 1 00:08:58.155 }, 00:08:58.155 { 00:08:58.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.155 "dma_device_type": 2 00:08:58.155 } 00:08:58.155 ], 00:08:58.155 "driver_specific": {} 00:08:58.155 }, 00:08:58.155 { 00:08:58.155 "name": "Passthru0", 00:08:58.155 "aliases": [ 00:08:58.155 "ea19f752-7acc-57e9-97c7-d722e66c9aea" 00:08:58.155 ], 00:08:58.155 "product_name": "passthru", 00:08:58.155 "block_size": 512, 00:08:58.155 "num_blocks": 16384, 00:08:58.155 "uuid": "ea19f752-7acc-57e9-97c7-d722e66c9aea", 00:08:58.155 "assigned_rate_limits": { 00:08:58.155 "rw_ios_per_sec": 0, 00:08:58.155 "rw_mbytes_per_sec": 0, 00:08:58.155 "r_mbytes_per_sec": 0, 00:08:58.155 "w_mbytes_per_sec": 0 00:08:58.155 }, 00:08:58.155 "claimed": false, 00:08:58.155 "zoned": false, 00:08:58.155 "supported_io_types": { 00:08:58.155 "read": true, 00:08:58.155 "write": true, 00:08:58.155 "unmap": true, 00:08:58.155 "flush": true, 00:08:58.155 "reset": true, 00:08:58.155 "nvme_admin": false, 00:08:58.155 "nvme_io": false, 00:08:58.155 "nvme_io_md": false, 00:08:58.155 "write_zeroes": true, 00:08:58.155 "zcopy": true, 00:08:58.155 "get_zone_info": false, 00:08:58.155 "zone_management": false, 00:08:58.156 "zone_append": false, 00:08:58.156 "compare": false, 00:08:58.156 "compare_and_write": false, 00:08:58.156 "abort": true, 00:08:58.156 "seek_hole": false, 00:08:58.156 "seek_data": false, 00:08:58.156 "copy": true, 00:08:58.156 "nvme_iov_md": false 00:08:58.156 }, 00:08:58.156 "memory_domains": [ 00:08:58.156 { 00:08:58.156 "dma_device_id": "system", 00:08:58.156 "dma_device_type": 1 00:08:58.156 }, 00:08:58.156 { 00:08:58.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.156 "dma_device_type": 2 00:08:58.156 } 00:08:58.156 ], 00:08:58.156 "driver_specific": { 00:08:58.156 "passthru": { 00:08:58.156 "name": "Passthru0", 00:08:58.156 "base_bdev_name": "Malloc0" 00:08:58.156 } 00:08:58.156 } 00:08:58.156 } 00:08:58.156 ]' 00:08:58.156 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:58.156 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:58.156 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.156 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.156 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:58.156 09:19:35 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.156 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.454 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.454 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:58.454 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:58.454 09:19:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:58.454 00:08:58.454 real 0m0.326s 00:08:58.454 user 0m0.186s 00:08:58.454 sys 0m0.061s 00:08:58.454 ************************************ 00:08:58.454 END TEST rpc_integrity 00:08:58.454 ************************************ 00:08:58.454 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.454 09:19:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.454 09:19:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:58.454 09:19:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.454 09:19:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.454 09:19:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.454 ************************************ 00:08:58.454 START TEST rpc_plugins 00:08:58.454 ************************************ 00:08:58.454 09:19:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:58.454 09:19:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:58.454 09:19:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.454 09:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.454 09:19:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.454 09:19:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:58.454 09:19:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:58.454 09:19:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.454 09:19:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.454 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.454 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:58.454 { 00:08:58.454 "name": "Malloc1", 00:08:58.454 "aliases": [ 00:08:58.454 "e74c65c6-b8f4-4953-903a-768f5fdc7adc" 00:08:58.454 ], 00:08:58.454 "product_name": "Malloc disk", 00:08:58.454 "block_size": 4096, 00:08:58.454 "num_blocks": 256, 00:08:58.454 "uuid": "e74c65c6-b8f4-4953-903a-768f5fdc7adc", 00:08:58.454 "assigned_rate_limits": { 00:08:58.454 "rw_ios_per_sec": 0, 00:08:58.454 "rw_mbytes_per_sec": 0, 00:08:58.454 "r_mbytes_per_sec": 0, 00:08:58.454 "w_mbytes_per_sec": 0 00:08:58.454 }, 00:08:58.454 "claimed": false, 00:08:58.454 "zoned": false, 00:08:58.454 "supported_io_types": { 00:08:58.454 "read": true, 00:08:58.454 "write": true, 00:08:58.454 "unmap": true, 00:08:58.454 "flush": true, 00:08:58.454 "reset": true, 00:08:58.454 "nvme_admin": false, 00:08:58.454 "nvme_io": false, 00:08:58.454 "nvme_io_md": false, 00:08:58.455 "write_zeroes": true, 00:08:58.455 "zcopy": true, 00:08:58.455 "get_zone_info": false, 00:08:58.455 "zone_management": false, 00:08:58.455 "zone_append": false, 00:08:58.455 "compare": false, 00:08:58.455 "compare_and_write": false, 00:08:58.455 "abort": true, 00:08:58.455 "seek_hole": false, 00:08:58.455 "seek_data": false, 00:08:58.455 "copy": true, 00:08:58.455 "nvme_iov_md": false 00:08:58.455 }, 00:08:58.455 "memory_domains": [ 00:08:58.455 { 
00:08:58.455 "dma_device_id": "system", 00:08:58.455 "dma_device_type": 1 00:08:58.455 }, 00:08:58.455 { 00:08:58.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.455 "dma_device_type": 2 00:08:58.455 } 00:08:58.455 ], 00:08:58.455 "driver_specific": {} 00:08:58.455 } 00:08:58.455 ]' 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:58.455 09:19:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:58.455 00:08:58.455 real 0m0.157s 00:08:58.455 user 0m0.080s 00:08:58.455 sys 0m0.032s 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.455 ************************************ 00:08:58.455 END TEST rpc_plugins 00:08:58.455 ************************************ 00:08:58.455 09:19:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:58.713 09:19:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:58.713 09:19:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.713 09:19:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.713 09:19:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.713 ************************************ 00:08:58.713 START TEST rpc_trace_cmd_test 00:08:58.713 ************************************ 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.713 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:58.713 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56694", 00:08:58.713 "tpoint_group_mask": "0x8", 00:08:58.713 "iscsi_conn": { 00:08:58.713 "mask": "0x2", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "scsi": { 00:08:58.713 "mask": "0x4", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "bdev": { 00:08:58.713 "mask": "0x8", 00:08:58.713 "tpoint_mask": "0xffffffffffffffff" 00:08:58.713 }, 00:08:58.713 "nvmf_rdma": { 00:08:58.713 "mask": "0x10", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "nvmf_tcp": { 00:08:58.713 "mask": "0x20", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "ftl": { 00:08:58.713 
"mask": "0x40", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "blobfs": { 00:08:58.713 "mask": "0x80", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "dsa": { 00:08:58.713 "mask": "0x200", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "thread": { 00:08:58.713 "mask": "0x400", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "nvme_pcie": { 00:08:58.713 "mask": "0x800", 00:08:58.713 "tpoint_mask": "0x0" 00:08:58.713 }, 00:08:58.713 "iaa": { 00:08:58.713 "mask": "0x1000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 }, 00:08:58.714 "nvme_tcp": { 00:08:58.714 "mask": "0x2000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 }, 00:08:58.714 "bdev_nvme": { 00:08:58.714 "mask": "0x4000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 }, 00:08:58.714 "sock": { 00:08:58.714 "mask": "0x8000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 }, 00:08:58.714 "blob": { 00:08:58.714 "mask": "0x10000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 }, 00:08:58.714 "bdev_raid": { 00:08:58.714 "mask": "0x20000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 }, 00:08:58.714 "scheduler": { 00:08:58.714 "mask": "0x40000", 00:08:58.714 "tpoint_mask": "0x0" 00:08:58.714 } 00:08:58.714 }' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:58.714 00:08:58.714 real 0m0.230s 00:08:58.714 user 0m0.172s 00:08:58.714 sys 0m0.048s 00:08:58.714 ************************************ 00:08:58.714 END TEST rpc_trace_cmd_test 00:08:58.714 ************************************ 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.714 09:19:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 09:19:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:58.972 09:19:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:58.972 09:19:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:58.972 09:19:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.972 09:19:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.972 09:19:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 ************************************ 00:08:58.972 START TEST rpc_daemon_integrity 00:08:58.972 ************************************ 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 
09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.972 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:58.972 { 00:08:58.972 "name": "Malloc2", 00:08:58.972 "aliases": [ 00:08:58.972 "308612a8-6edb-4e5a-a17b-78470e53a703" 00:08:58.972 ], 00:08:58.972 "product_name": "Malloc disk", 00:08:58.972 "block_size": 512, 00:08:58.972 "num_blocks": 16384, 00:08:58.972 "uuid": "308612a8-6edb-4e5a-a17b-78470e53a703", 00:08:58.972 "assigned_rate_limits": { 00:08:58.972 "rw_ios_per_sec": 0, 00:08:58.972 "rw_mbytes_per_sec": 0, 00:08:58.972 "r_mbytes_per_sec": 0, 00:08:58.972 "w_mbytes_per_sec": 0 00:08:58.972 }, 00:08:58.972 "claimed": false, 00:08:58.972 "zoned": false, 00:08:58.972 "supported_io_types": { 00:08:58.972 "read": true, 00:08:58.972 "write": true, 00:08:58.972 "unmap": true, 00:08:58.972 "flush": true, 00:08:58.972 "reset": true, 00:08:58.972 "nvme_admin": false, 00:08:58.972 "nvme_io": false, 00:08:58.972 "nvme_io_md": false, 00:08:58.972 "write_zeroes": true, 00:08:58.972 "zcopy": true, 00:08:58.972 "get_zone_info": false, 00:08:58.972 "zone_management": false, 00:08:58.972 "zone_append": false, 00:08:58.972 "compare": false, 00:08:58.972 "compare_and_write": false, 00:08:58.972 "abort": true, 00:08:58.972 "seek_hole": false, 00:08:58.972 "seek_data": false, 00:08:58.972 "copy": true, 00:08:58.972 "nvme_iov_md": false 00:08:58.972 }, 00:08:58.972 "memory_domains": [ 00:08:58.972 { 00:08:58.972 "dma_device_id": "system", 00:08:58.972 "dma_device_type": 1 00:08:58.973 }, 00:08:58.973 { 00:08:58.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.973 "dma_device_type": 2 00:08:58.973 } 00:08:58.973 ], 00:08:58.973 "driver_specific": {} 00:08:58.973 } 00:08:58.973 ]' 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.973 [2024-12-09 09:19:36.617039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:58.973 [2024-12-09 09:19:36.617110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:08:58.973 [2024-12-09 09:19:36.617130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x159d430 00:08:58.973 [2024-12-09 09:19:36.617140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.973 [2024-12-09 09:19:36.618826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.973 [2024-12-09 09:19:36.618864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:58.973 Passthru0 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:58.973 { 00:08:58.973 "name": "Malloc2", 00:08:58.973 "aliases": [ 00:08:58.973 "308612a8-6edb-4e5a-a17b-78470e53a703" 00:08:58.973 ], 00:08:58.973 "product_name": "Malloc disk", 00:08:58.973 "block_size": 512, 00:08:58.973 "num_blocks": 16384, 00:08:58.973 "uuid": "308612a8-6edb-4e5a-a17b-78470e53a703", 00:08:58.973 "assigned_rate_limits": { 00:08:58.973 "rw_ios_per_sec": 0, 00:08:58.973 "rw_mbytes_per_sec": 0, 00:08:58.973 "r_mbytes_per_sec": 0, 00:08:58.973 "w_mbytes_per_sec": 0 00:08:58.973 }, 00:08:58.973 "claimed": true, 00:08:58.973 "claim_type": "exclusive_write", 00:08:58.973 "zoned": false, 00:08:58.973 "supported_io_types": { 00:08:58.973 "read": true, 00:08:58.973 "write": true, 00:08:58.973 "unmap": true, 00:08:58.973 "flush": true, 00:08:58.973 "reset": true, 00:08:58.973 "nvme_admin": false, 00:08:58.973 "nvme_io": false, 00:08:58.973 "nvme_io_md": false, 00:08:58.973 "write_zeroes": true, 00:08:58.973 "zcopy": true, 00:08:58.973 "get_zone_info": false, 00:08:58.973 "zone_management": false, 00:08:58.973 "zone_append": false, 00:08:58.973 "compare": false, 00:08:58.973 "compare_and_write": false, 00:08:58.973 "abort": true, 00:08:58.973 "seek_hole": false, 00:08:58.973 "seek_data": false, 00:08:58.973 "copy": true, 00:08:58.973 "nvme_iov_md": false 00:08:58.973 }, 00:08:58.973 "memory_domains": [ 00:08:58.973 { 00:08:58.973 "dma_device_id": "system", 00:08:58.973 "dma_device_type": 1 00:08:58.973 }, 00:08:58.973 { 00:08:58.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.973 "dma_device_type": 2 00:08:58.973 } 00:08:58.973 ], 00:08:58.973 "driver_specific": {} 00:08:58.973 }, 00:08:58.973 { 00:08:58.973 "name": "Passthru0", 00:08:58.973 "aliases": [ 00:08:58.973 "ca4d0540-84e0-507e-874a-2e2db7d07617" 00:08:58.973 ], 00:08:58.973 "product_name": "passthru", 00:08:58.973 "block_size": 512, 00:08:58.973 "num_blocks": 16384, 00:08:58.973 "uuid": "ca4d0540-84e0-507e-874a-2e2db7d07617", 00:08:58.973 "assigned_rate_limits": { 00:08:58.973 "rw_ios_per_sec": 0, 00:08:58.973 "rw_mbytes_per_sec": 0, 00:08:58.973 "r_mbytes_per_sec": 0, 00:08:58.973 "w_mbytes_per_sec": 0 00:08:58.973 }, 00:08:58.973 "claimed": false, 00:08:58.973 "zoned": false, 00:08:58.973 "supported_io_types": { 00:08:58.973 "read": true, 00:08:58.973 "write": true, 00:08:58.973 "unmap": true, 00:08:58.973 "flush": true, 00:08:58.973 "reset": true, 00:08:58.973 "nvme_admin": false, 00:08:58.973 "nvme_io": false, 00:08:58.973 
"nvme_io_md": false, 00:08:58.973 "write_zeroes": true, 00:08:58.973 "zcopy": true, 00:08:58.973 "get_zone_info": false, 00:08:58.973 "zone_management": false, 00:08:58.973 "zone_append": false, 00:08:58.973 "compare": false, 00:08:58.973 "compare_and_write": false, 00:08:58.973 "abort": true, 00:08:58.973 "seek_hole": false, 00:08:58.973 "seek_data": false, 00:08:58.973 "copy": true, 00:08:58.973 "nvme_iov_md": false 00:08:58.973 }, 00:08:58.973 "memory_domains": [ 00:08:58.973 { 00:08:58.973 "dma_device_id": "system", 00:08:58.973 "dma_device_type": 1 00:08:58.973 }, 00:08:58.973 { 00:08:58.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.973 "dma_device_type": 2 00:08:58.973 } 00:08:58.973 ], 00:08:58.973 "driver_specific": { 00:08:58.973 "passthru": { 00:08:58.973 "name": "Passthru0", 00:08:58.973 "base_bdev_name": "Malloc2" 00:08:58.973 } 00:08:58.973 } 00:08:58.973 } 00:08:58.973 ]' 00:08:58.973 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:59.231 00:08:59.231 real 0m0.306s 00:08:59.231 user 0m0.181s 00:08:59.231 sys 0m0.056s 00:08:59.231 ************************************ 00:08:59.231 END TEST rpc_daemon_integrity 00:08:59.231 ************************************ 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.231 09:19:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.231 09:19:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:59.231 09:19:36 rpc -- rpc/rpc.sh@84 -- # killprocess 56694 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@954 -- # '[' -z 56694 ']' 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@958 -- # kill -0 56694 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@959 -- # uname 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56694 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:59.231 killing process with pid 56694 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56694' 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@973 -- # kill 56694 00:08:59.231 09:19:36 rpc -- common/autotest_common.sh@978 -- # wait 56694 00:09:00.168 00:09:00.168 real 0m3.123s 00:09:00.168 user 0m3.730s 00:09:00.168 sys 0m0.805s 00:09:00.168 09:19:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.168 09:19:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.168 ************************************ 00:09:00.168 END TEST rpc 00:09:00.168 ************************************ 00:09:00.168 09:19:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:00.168 09:19:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.168 09:19:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.168 09:19:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.168 ************************************ 00:09:00.168 START TEST skip_rpc 00:09:00.168 ************************************ 00:09:00.168 09:19:37 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:00.168 * Looking for test storage... 00:09:00.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:00.168 09:19:37 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.168 09:19:37 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.168 09:19:37 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.168 09:19:37 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:00.168 09:19:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.169 09:19:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.169 09:19:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.169 09:19:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.169 --rc genhtml_branch_coverage=1 00:09:00.169 --rc genhtml_function_coverage=1 00:09:00.169 --rc genhtml_legend=1 00:09:00.169 --rc geninfo_all_blocks=1 00:09:00.169 --rc geninfo_unexecuted_blocks=1 00:09:00.169 00:09:00.169 ' 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.169 --rc genhtml_branch_coverage=1 00:09:00.169 --rc genhtml_function_coverage=1 00:09:00.169 --rc genhtml_legend=1 00:09:00.169 --rc geninfo_all_blocks=1 00:09:00.169 --rc geninfo_unexecuted_blocks=1 00:09:00.169 00:09:00.169 ' 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.169 --rc genhtml_branch_coverage=1 00:09:00.169 --rc genhtml_function_coverage=1 00:09:00.169 --rc genhtml_legend=1 00:09:00.169 --rc geninfo_all_blocks=1 00:09:00.169 --rc geninfo_unexecuted_blocks=1 00:09:00.169 00:09:00.169 ' 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.169 --rc genhtml_branch_coverage=1 00:09:00.169 --rc genhtml_function_coverage=1 00:09:00.169 --rc genhtml_legend=1 00:09:00.169 --rc geninfo_all_blocks=1 00:09:00.169 --rc geninfo_unexecuted_blocks=1 00:09:00.169 00:09:00.169 ' 00:09:00.169 09:19:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:00.169 09:19:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:00.169 09:19:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.169 09:19:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.169 ************************************ 00:09:00.169 START TEST skip_rpc 00:09:00.169 ************************************ 00:09:00.169 09:19:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:00.169 09:19:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56906 00:09:00.169 09:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:00.169 09:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:00.169 09:19:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:00.428 [2024-12-09 09:19:37.906169] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:00.428 [2024-12-09 09:19:37.906449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56906 ] 00:09:00.428 [2024-12-09 09:19:38.054837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.428 [2024-12-09 09:19:38.136059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.687 [2024-12-09 09:19:38.232283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56906 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56906 ']' 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56906 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56906 00:09:05.957 killing process with pid 56906 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56906' 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56906 00:09:05.957 09:19:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56906 00:09:05.957 00:09:05.957 real 0m5.375s 00:09:05.957 user 0m4.903s 00:09:05.957 sys 0m0.397s 00:09:05.957 09:19:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.957 ************************************ 00:09:05.957 END TEST skip_rpc 00:09:05.957 ************************************ 00:09:05.957 09:19:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.957 09:19:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:05.957 09:19:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.957 09:19:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.957 09:19:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.957 ************************************ 00:09:05.957 START TEST skip_rpc_with_json 00:09:05.957 ************************************ 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56987 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56987 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56987 ']' 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.957 09:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:05.957 [2024-12-09 09:19:43.356996] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
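skip_rpc_with_json is a configuration round-trip: the freshly started target has no TCP transport yet (hence the nvmf_get_transports error below), one is created over RPC, the full runtime configuration is saved to config.json, and a second spdk_tgt is later launched from that file with the RPC server disabled. A condensed sketch of the same flow (assuming the usual scripts/rpc.py client; CONFIG_PATH is the path set by skip_rpc.sh@11):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
$rpc nvmf_get_transports --trtype tcp || true   # expected to fail on a bare target
$rpc nvmf_create_transport -t tcp               # now the saved config is non-trivial
$rpc save_config > "$CONFIG_PATH"               # produces the JSON dumped below
# later: relaunch without an RPC server, consuming the saved file
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH"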
00:09:05.957 [2024-12-09 09:19:43.357076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56987 ] 00:09:05.957 [2024-12-09 09:19:43.508672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.957 [2024-12-09 09:19:43.561134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.957 [2024-12-09 09:19:43.618720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.891 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.891 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:06.891 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:06.891 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.891 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:06.891 [2024-12-09 09:19:44.283815] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:06.891 request: 00:09:06.891 { 00:09:06.891 "trtype": "tcp", 00:09:06.891 "method": "nvmf_get_transports", 00:09:06.892 "req_id": 1 00:09:06.892 } 00:09:06.892 Got JSON-RPC error response 00:09:06.892 response: 00:09:06.892 { 00:09:06.892 "code": -19, 00:09:06.892 "message": "No such device" 00:09:06.892 } 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:06.892 [2024-12-09 09:19:44.295896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:06.892 { 00:09:06.892 "subsystems": [ 00:09:06.892 { 00:09:06.892 "subsystem": "fsdev", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "fsdev_set_opts", 00:09:06.892 "params": { 00:09:06.892 "fsdev_io_pool_size": 65535, 00:09:06.892 "fsdev_io_cache_size": 256 00:09:06.892 } 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "keyring", 00:09:06.892 "config": [] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "iobuf", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "iobuf_set_options", 00:09:06.892 "params": { 00:09:06.892 "small_pool_count": 8192, 00:09:06.892 "large_pool_count": 1024, 00:09:06.892 "small_bufsize": 8192, 00:09:06.892 "large_bufsize": 135168, 00:09:06.892 "enable_numa": false 00:09:06.892 } 
00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "sock", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "sock_set_default_impl", 00:09:06.892 "params": { 00:09:06.892 "impl_name": "uring" 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "sock_impl_set_options", 00:09:06.892 "params": { 00:09:06.892 "impl_name": "ssl", 00:09:06.892 "recv_buf_size": 4096, 00:09:06.892 "send_buf_size": 4096, 00:09:06.892 "enable_recv_pipe": true, 00:09:06.892 "enable_quickack": false, 00:09:06.892 "enable_placement_id": 0, 00:09:06.892 "enable_zerocopy_send_server": true, 00:09:06.892 "enable_zerocopy_send_client": false, 00:09:06.892 "zerocopy_threshold": 0, 00:09:06.892 "tls_version": 0, 00:09:06.892 "enable_ktls": false 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "sock_impl_set_options", 00:09:06.892 "params": { 00:09:06.892 "impl_name": "posix", 00:09:06.892 "recv_buf_size": 2097152, 00:09:06.892 "send_buf_size": 2097152, 00:09:06.892 "enable_recv_pipe": true, 00:09:06.892 "enable_quickack": false, 00:09:06.892 "enable_placement_id": 0, 00:09:06.892 "enable_zerocopy_send_server": true, 00:09:06.892 "enable_zerocopy_send_client": false, 00:09:06.892 "zerocopy_threshold": 0, 00:09:06.892 "tls_version": 0, 00:09:06.892 "enable_ktls": false 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "sock_impl_set_options", 00:09:06.892 "params": { 00:09:06.892 "impl_name": "uring", 00:09:06.892 "recv_buf_size": 2097152, 00:09:06.892 "send_buf_size": 2097152, 00:09:06.892 "enable_recv_pipe": true, 00:09:06.892 "enable_quickack": false, 00:09:06.892 "enable_placement_id": 0, 00:09:06.892 "enable_zerocopy_send_server": false, 00:09:06.892 "enable_zerocopy_send_client": false, 00:09:06.892 "zerocopy_threshold": 0, 00:09:06.892 "tls_version": 0, 00:09:06.892 "enable_ktls": false 00:09:06.892 } 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "vmd", 00:09:06.892 "config": [] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "accel", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "accel_set_options", 00:09:06.892 "params": { 00:09:06.892 "small_cache_size": 128, 00:09:06.892 "large_cache_size": 16, 00:09:06.892 "task_count": 2048, 00:09:06.892 "sequence_count": 2048, 00:09:06.892 "buf_count": 2048 00:09:06.892 } 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "bdev", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "bdev_set_options", 00:09:06.892 "params": { 00:09:06.892 "bdev_io_pool_size": 65535, 00:09:06.892 "bdev_io_cache_size": 256, 00:09:06.892 "bdev_auto_examine": true, 00:09:06.892 "iobuf_small_cache_size": 128, 00:09:06.892 "iobuf_large_cache_size": 16 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "bdev_raid_set_options", 00:09:06.892 "params": { 00:09:06.892 "process_window_size_kb": 1024, 00:09:06.892 "process_max_bandwidth_mb_sec": 0 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "bdev_iscsi_set_options", 00:09:06.892 "params": { 00:09:06.892 "timeout_sec": 30 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "bdev_nvme_set_options", 00:09:06.892 "params": { 00:09:06.892 "action_on_timeout": "none", 00:09:06.892 "timeout_us": 0, 00:09:06.892 "timeout_admin_us": 0, 00:09:06.892 "keep_alive_timeout_ms": 10000, 00:09:06.892 "arbitration_burst": 0, 00:09:06.892 "low_priority_weight": 0, 00:09:06.892 "medium_priority_weight": 
0, 00:09:06.892 "high_priority_weight": 0, 00:09:06.892 "nvme_adminq_poll_period_us": 10000, 00:09:06.892 "nvme_ioq_poll_period_us": 0, 00:09:06.892 "io_queue_requests": 0, 00:09:06.892 "delay_cmd_submit": true, 00:09:06.892 "transport_retry_count": 4, 00:09:06.892 "bdev_retry_count": 3, 00:09:06.892 "transport_ack_timeout": 0, 00:09:06.892 "ctrlr_loss_timeout_sec": 0, 00:09:06.892 "reconnect_delay_sec": 0, 00:09:06.892 "fast_io_fail_timeout_sec": 0, 00:09:06.892 "disable_auto_failback": false, 00:09:06.892 "generate_uuids": false, 00:09:06.892 "transport_tos": 0, 00:09:06.892 "nvme_error_stat": false, 00:09:06.892 "rdma_srq_size": 0, 00:09:06.892 "io_path_stat": false, 00:09:06.892 "allow_accel_sequence": false, 00:09:06.892 "rdma_max_cq_size": 0, 00:09:06.892 "rdma_cm_event_timeout_ms": 0, 00:09:06.892 "dhchap_digests": [ 00:09:06.892 "sha256", 00:09:06.892 "sha384", 00:09:06.892 "sha512" 00:09:06.892 ], 00:09:06.892 "dhchap_dhgroups": [ 00:09:06.892 "null", 00:09:06.892 "ffdhe2048", 00:09:06.892 "ffdhe3072", 00:09:06.892 "ffdhe4096", 00:09:06.892 "ffdhe6144", 00:09:06.892 "ffdhe8192" 00:09:06.892 ] 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "bdev_nvme_set_hotplug", 00:09:06.892 "params": { 00:09:06.892 "period_us": 100000, 00:09:06.892 "enable": false 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "bdev_wait_for_examine" 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "scsi", 00:09:06.892 "config": null 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "scheduler", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "framework_set_scheduler", 00:09:06.892 "params": { 00:09:06.892 "name": "static" 00:09:06.892 } 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "vhost_scsi", 00:09:06.892 "config": [] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "vhost_blk", 00:09:06.892 "config": [] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "ublk", 00:09:06.892 "config": [] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "nbd", 00:09:06.892 "config": [] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "nvmf", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "nvmf_set_config", 00:09:06.892 "params": { 00:09:06.892 "discovery_filter": "match_any", 00:09:06.892 "admin_cmd_passthru": { 00:09:06.892 "identify_ctrlr": false 00:09:06.892 }, 00:09:06.892 "dhchap_digests": [ 00:09:06.892 "sha256", 00:09:06.892 "sha384", 00:09:06.892 "sha512" 00:09:06.892 ], 00:09:06.892 "dhchap_dhgroups": [ 00:09:06.892 "null", 00:09:06.892 "ffdhe2048", 00:09:06.892 "ffdhe3072", 00:09:06.892 "ffdhe4096", 00:09:06.892 "ffdhe6144", 00:09:06.892 "ffdhe8192" 00:09:06.892 ] 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "nvmf_set_max_subsystems", 00:09:06.892 "params": { 00:09:06.892 "max_subsystems": 1024 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "nvmf_set_crdt", 00:09:06.892 "params": { 00:09:06.892 "crdt1": 0, 00:09:06.892 "crdt2": 0, 00:09:06.892 "crdt3": 0 00:09:06.892 } 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "method": "nvmf_create_transport", 00:09:06.892 "params": { 00:09:06.892 "trtype": "TCP", 00:09:06.892 "max_queue_depth": 128, 00:09:06.892 "max_io_qpairs_per_ctrlr": 127, 00:09:06.892 "in_capsule_data_size": 4096, 00:09:06.892 "max_io_size": 131072, 00:09:06.892 "io_unit_size": 131072, 00:09:06.892 "max_aq_depth": 128, 00:09:06.892 "num_shared_buffers": 511, 00:09:06.892 
"buf_cache_size": 4294967295, 00:09:06.892 "dif_insert_or_strip": false, 00:09:06.892 "zcopy": false, 00:09:06.892 "c2h_success": true, 00:09:06.892 "sock_priority": 0, 00:09:06.892 "abort_timeout_sec": 1, 00:09:06.892 "ack_timeout": 0, 00:09:06.892 "data_wr_pool_size": 0 00:09:06.892 } 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 }, 00:09:06.892 { 00:09:06.892 "subsystem": "iscsi", 00:09:06.892 "config": [ 00:09:06.892 { 00:09:06.892 "method": "iscsi_set_options", 00:09:06.892 "params": { 00:09:06.892 "node_base": "iqn.2016-06.io.spdk", 00:09:06.892 "max_sessions": 128, 00:09:06.892 "max_connections_per_session": 2, 00:09:06.892 "max_queue_depth": 64, 00:09:06.892 "default_time2wait": 2, 00:09:06.892 "default_time2retain": 20, 00:09:06.892 "first_burst_length": 8192, 00:09:06.892 "immediate_data": true, 00:09:06.892 "allow_duplicated_isid": false, 00:09:06.892 "error_recovery_level": 0, 00:09:06.892 "nop_timeout": 60, 00:09:06.892 "nop_in_interval": 30, 00:09:06.892 "disable_chap": false, 00:09:06.892 "require_chap": false, 00:09:06.892 "mutual_chap": false, 00:09:06.892 "chap_group": 0, 00:09:06.892 "max_large_datain_per_connection": 64, 00:09:06.892 "max_r2t_per_connection": 4, 00:09:06.892 "pdu_pool_size": 36864, 00:09:06.892 "immediate_data_pool_size": 16384, 00:09:06.892 "data_out_pool_size": 2048 00:09:06.892 } 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 } 00:09:06.892 ] 00:09:06.892 } 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56987 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56987 ']' 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56987 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56987 00:09:06.892 killing process with pid 56987 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56987' 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56987 00:09:06.892 09:19:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56987 00:09:07.151 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57014 00:09:07.151 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:07.151 09:19:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57014 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57014 ']' 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57014 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:12.434 09:19:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57014 00:09:12.434 killing process with pid 57014 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57014' 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57014 00:09:12.434 09:19:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57014 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:12.693 00:09:12.693 real 0m6.917s 00:09:12.693 user 0m6.680s 00:09:12.693 sys 0m0.613s 00:09:12.693 ************************************ 00:09:12.693 END TEST skip_rpc_with_json 00:09:12.693 ************************************ 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:12.693 09:19:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:12.693 09:19:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.693 09:19:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.693 09:19:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.693 ************************************ 00:09:12.693 START TEST skip_rpc_with_delay 00:09:12.693 ************************************ 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:12.693 09:19:50 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:12.693 [2024-12-09 09:19:50.348789] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.693 00:09:12.693 real 0m0.081s 00:09:12.693 user 0m0.052s 00:09:12.693 sys 0m0.028s 00:09:12.693 ************************************ 00:09:12.693 END TEST skip_rpc_with_delay 00:09:12.693 ************************************ 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.693 09:19:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 09:19:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:12.953 09:19:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:12.953 09:19:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:12.953 09:19:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.953 09:19:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.953 09:19:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 ************************************ 00:09:12.953 START TEST exit_on_failed_rpc_init 00:09:12.953 ************************************ 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57124 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57124 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57124 ']' 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.953 09:19:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:12.953 [2024-12-09 09:19:50.501953] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
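The JSON dump above is the running configuration that save_config emitted before the relaunch: a top-level "subsystems" array in which every entry names a subsystem and lists the RPC method/params pairs needed to rebuild it. A minimal sketch of the same round trip that skip_rpc_with_json performs follows; the /tmp paths are invented for illustration, the spdk_tgt path and flags are the ones used in this run, and the grep string is the socket-override notice this log prints when the uring default takes effect.

# Minimal config that keeps only the sock settings shown in the dump above.
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "sock",
      "config": [
        { "method": "sock_set_default_impl", "params": { "impl_name": "uring" } }
      ]
    }
  ]
}
EOF

# Boot the target from the file without an RPC server, as skip_rpc_with_json does,
# then look for the override notice in the captured log.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json /tmp/minimal_config.json > /tmp/log.txt 2>&1 &
sleep 5
kill "$!"
grep -q 'Default socket implementation override: uring' /tmp/log.txt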
00:09:12.953 [2024-12-09 09:19:50.502223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57124 ] 00:09:12.953 [2024-12-09 09:19:50.651421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.212 [2024-12-09 09:19:50.704934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.212 [2024-12-09 09:19:50.761809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:13.778 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:13.778 [2024-12-09 09:19:51.445691] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:13.778 [2024-12-09 09:19:51.445773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57142 ] 00:09:14.046 [2024-12-09 09:19:51.594646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.046 [2024-12-09 09:19:51.650655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.046 [2024-12-09 09:19:51.650952] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
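The error above is exactly what exit_on_failed_rpc_init checks for: a second target cannot listen on the default RPC socket /var/tmp/spdk.sock while the first still owns it. A hedged sketch of the conflict and of the usual escape hatch (a per-instance socket via -r, as the json_config tests later do with /var/tmp/spdk_tgt.sock); the sleeps and the spdk_second.sock name are illustrative stand-ins for the harness's waitforlisten helper and its naming.

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance takes the default RPC socket /var/tmp/spdk.sock.
"$SPDK_TGT" -m 0x1 &
first=$!
sleep 1

# A second instance on the same default socket fails with
# "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
"$SPDK_TGT" -m 0x2 || echo "second target exited non-zero, as expected"

# Pointing the second instance at its own socket avoids the clash.
"$SPDK_TGT" -m 0x2 -r /var/tmp/spdk_second.sock &
second=$!
sleep 1
kill "$first" "$second"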
00:09:14.046 [2024-12-09 09:19:51.651080] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:14.046 [2024-12-09 09:19:51.651113] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57124 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57124 ']' 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57124 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57124 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57124' 00:09:14.046 killing process with pid 57124 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57124 00:09:14.046 09:19:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57124 00:09:14.613 00:09:14.613 real 0m1.636s 00:09:14.613 user 0m1.825s 00:09:14.613 sys 0m0.389s 00:09:14.613 09:19:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.613 ************************************ 00:09:14.613 09:19:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:14.613 END TEST exit_on_failed_rpc_init 00:09:14.613 ************************************ 00:09:14.613 09:19:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:14.613 ************************************ 00:09:14.613 END TEST skip_rpc 00:09:14.613 ************************************ 00:09:14.613 00:09:14.613 real 0m14.540s 00:09:14.613 user 0m13.697s 00:09:14.613 sys 0m1.739s 00:09:14.613 09:19:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.613 09:19:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.613 09:19:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:14.613 09:19:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.613 09:19:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.613 09:19:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.613 
************************************ 00:09:14.613 START TEST rpc_client 00:09:14.613 ************************************ 00:09:14.613 09:19:52 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:14.613 * Looking for test storage... 00:09:14.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:14.613 09:19:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.872 09:19:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.872 --rc genhtml_branch_coverage=1 00:09:14.872 --rc genhtml_function_coverage=1 00:09:14.872 --rc genhtml_legend=1 00:09:14.872 --rc geninfo_all_blocks=1 00:09:14.872 --rc geninfo_unexecuted_blocks=1 00:09:14.872 00:09:14.872 ' 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.872 --rc genhtml_branch_coverage=1 00:09:14.872 --rc genhtml_function_coverage=1 00:09:14.872 --rc genhtml_legend=1 00:09:14.872 --rc geninfo_all_blocks=1 00:09:14.872 --rc geninfo_unexecuted_blocks=1 00:09:14.872 00:09:14.872 ' 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.872 --rc genhtml_branch_coverage=1 00:09:14.872 --rc genhtml_function_coverage=1 00:09:14.872 --rc genhtml_legend=1 00:09:14.872 --rc geninfo_all_blocks=1 00:09:14.872 --rc geninfo_unexecuted_blocks=1 00:09:14.872 00:09:14.872 ' 00:09:14.872 09:19:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.872 --rc genhtml_branch_coverage=1 00:09:14.872 --rc genhtml_function_coverage=1 00:09:14.872 --rc genhtml_legend=1 00:09:14.872 --rc geninfo_all_blocks=1 00:09:14.872 --rc geninfo_unexecuted_blocks=1 00:09:14.872 00:09:14.872 ' 00:09:14.872 09:19:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:14.872 OK 00:09:14.872 09:19:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:14.873 00:09:14.873 real 0m0.262s 00:09:14.873 user 0m0.162s 00:09:14.873 sys 0m0.115s 00:09:14.873 09:19:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.873 09:19:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:14.873 ************************************ 00:09:14.873 END TEST rpc_client 00:09:14.873 ************************************ 00:09:14.873 09:19:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:14.873 09:19:52 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.873 09:19:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.873 09:19:52 -- common/autotest_common.sh@10 -- # set +x 00:09:14.873 ************************************ 00:09:14.873 START TEST json_config 00:09:14.873 ************************************ 00:09:14.873 09:19:52 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:15.132 09:19:52 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.132 09:19:52 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.132 09:19:52 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.132 09:19:52 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.132 09:19:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.132 09:19:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.132 09:19:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.132 09:19:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.132 09:19:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.132 09:19:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:15.132 09:19:52 json_config -- scripts/common.sh@345 -- # : 1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.132 09:19:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.132 09:19:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@353 -- # local d=1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.132 09:19:52 json_config -- scripts/common.sh@355 -- # echo 1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.132 09:19:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@353 -- # local d=2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.132 09:19:52 json_config -- scripts/common.sh@355 -- # echo 2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.132 09:19:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.133 09:19:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.133 09:19:52 json_config -- scripts/common.sh@368 -- # return 0 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.133 --rc genhtml_branch_coverage=1 00:09:15.133 --rc genhtml_function_coverage=1 00:09:15.133 --rc genhtml_legend=1 00:09:15.133 --rc geninfo_all_blocks=1 00:09:15.133 --rc geninfo_unexecuted_blocks=1 00:09:15.133 00:09:15.133 ' 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.133 --rc genhtml_branch_coverage=1 00:09:15.133 --rc genhtml_function_coverage=1 00:09:15.133 --rc genhtml_legend=1 00:09:15.133 --rc geninfo_all_blocks=1 00:09:15.133 --rc geninfo_unexecuted_blocks=1 00:09:15.133 00:09:15.133 ' 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.133 --rc genhtml_branch_coverage=1 00:09:15.133 --rc genhtml_function_coverage=1 00:09:15.133 --rc genhtml_legend=1 00:09:15.133 --rc geninfo_all_blocks=1 00:09:15.133 --rc geninfo_unexecuted_blocks=1 00:09:15.133 00:09:15.133 ' 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.133 --rc genhtml_branch_coverage=1 00:09:15.133 --rc genhtml_function_coverage=1 00:09:15.133 --rc genhtml_legend=1 00:09:15.133 --rc geninfo_all_blocks=1 00:09:15.133 --rc geninfo_unexecuted_blocks=1 00:09:15.133 00:09:15.133 ' 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.133 09:19:52 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.133 09:19:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.133 09:19:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.133 09:19:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.133 09:19:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.133 09:19:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.133 09:19:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.133 09:19:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.133 09:19:52 json_config -- paths/export.sh@5 -- # export PATH 00:09:15.133 09:19:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@51 -- # : 0 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.133 09:19:52 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.133 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.133 09:19:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:15.133 INFO: JSON configuration test init 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:15.133 09:19:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.133 09:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:15.134 09:19:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:15.134 09:19:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:15.134 09:19:52 json_config -- json_config/common.sh@9 -- # local app=target 00:09:15.134 09:19:52 json_config -- json_config/common.sh@10 -- # shift 
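json_config_test_start_app launches the target paused with --wait-for-rpc on its own RPC socket (/var/tmp/spdk_tgt.sock) and then drives it entirely through rpc.py, as the load_config call traced just below shows. A compact sketch of that pattern; the rpc_get_methods polling loop is an assumed stand-in for the harness's waitforlisten helper, while the paths, flags and the gen_nvme.sh-to-load_config pipe mirror this run.

SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk_tgt.sock

# Start the target paused so no subsystem initializes before configuration arrives.
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$RPC_SOCK" --wait-for-rpc &

# Poll until the RPC socket answers, then feed it the generated NVMe config.
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done
"$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" load_config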
00:09:15.134 Waiting for target to run... 00:09:15.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:15.134 09:19:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:15.134 09:19:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:15.134 09:19:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:15.134 09:19:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:15.134 09:19:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:15.134 09:19:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57276 00:09:15.134 09:19:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:15.134 09:19:52 json_config -- json_config/common.sh@25 -- # waitforlisten 57276 /var/tmp/spdk_tgt.sock 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 57276 ']' 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.134 09:19:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.134 09:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:15.134 [2024-12-09 09:19:52.836773] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:15.134 [2024-12-09 09:19:52.837036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57276 ] 00:09:15.699 [2024-12-09 09:19:53.209326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.699 [2024-12-09 09:19:53.254591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.265 09:19:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.265 09:19:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:16.265 09:19:53 json_config -- json_config/common.sh@26 -- # echo '' 00:09:16.265 00:09:16.265 09:19:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:16.265 09:19:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:16.265 09:19:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.265 09:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.266 09:19:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:16.266 09:19:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:16.266 09:19:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.266 09:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.266 09:19:53 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:16.266 09:19:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:16.266 09:19:53 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:16.524 [2024-12-09 09:19:54.015291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:16.524 09:19:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.524 09:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:16.524 09:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:16.524 09:19:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@54 -- # sort 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:16.784 09:19:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:16.784 09:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:16.784 09:19:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:16.784 09:19:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.784 09:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.043 09:19:54 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:17.043 09:19:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:17.043 09:19:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:17.043 09:19:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:17.043 09:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:17.043 MallocForNvmf0 00:09:17.043 09:19:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:17.043 09:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:17.342 MallocForNvmf1 00:09:17.342 09:19:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:17.342 09:19:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:17.664 [2024-12-09 09:19:55.178009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.664 09:19:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.664 09:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.921 09:19:55 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:17.921 09:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:18.179 09:19:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:18.179 09:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:18.437 09:19:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:18.437 09:19:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:18.437 [2024-12-09 09:19:56.144972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:18.696 09:19:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:18.696 09:19:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.696 09:19:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.696 09:19:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:18.696 09:19:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.696 09:19:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.696 09:19:56 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:09:18.696 09:19:56 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:18.697 09:19:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:18.956 MallocBdevForConfigChangeCheck 00:09:18.956 09:19:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:18.956 09:19:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.956 09:19:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.956 09:19:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:18.956 09:19:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:19.215 INFO: shutting down applications... 00:09:19.215 09:19:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:19.215 09:19:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:19.215 09:19:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:19.215 09:19:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:19.215 09:19:56 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:19.475 Calling clear_iscsi_subsystem 00:09:19.476 Calling clear_nvmf_subsystem 00:09:19.476 Calling clear_nbd_subsystem 00:09:19.476 Calling clear_ublk_subsystem 00:09:19.476 Calling clear_vhost_blk_subsystem 00:09:19.476 Calling clear_vhost_scsi_subsystem 00:09:19.476 Calling clear_bdev_subsystem 00:09:19.476 09:19:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:19.735 09:19:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:19.735 09:19:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:19.735 09:19:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:19.735 09:19:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:19.735 09:19:57 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:19.995 09:19:57 json_config -- json_config/json_config.sh@352 -- # break 00:09:19.995 09:19:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:19.995 09:19:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:19.995 09:19:57 json_config -- json_config/common.sh@31 -- # local app=target 00:09:19.995 09:19:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:19.995 09:19:57 json_config -- json_config/common.sh@35 -- # [[ -n 57276 ]] 00:09:19.995 09:19:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57276 00:09:19.995 09:19:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:19.995 09:19:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:19.995 09:19:57 json_config -- json_config/common.sh@41 -- # kill -0 57276 00:09:19.995 09:19:57 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:09:20.563 09:19:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:20.563 09:19:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:20.563 09:19:58 json_config -- json_config/common.sh@41 -- # kill -0 57276 00:09:20.563 09:19:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:20.563 09:19:58 json_config -- json_config/common.sh@43 -- # break 00:09:20.563 09:19:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:20.563 SPDK target shutdown done 00:09:20.563 INFO: relaunching applications... 00:09:20.563 09:19:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:20.563 09:19:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:20.563 09:19:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:20.563 09:19:58 json_config -- json_config/common.sh@9 -- # local app=target 00:09:20.563 09:19:58 json_config -- json_config/common.sh@10 -- # shift 00:09:20.563 09:19:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:20.563 09:19:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:20.563 09:19:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:20.563 09:19:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.563 09:19:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.563 09:19:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57466 00:09:20.563 09:19:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:20.563 09:19:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:20.564 Waiting for target to run... 00:09:20.564 09:19:58 json_config -- json_config/common.sh@25 -- # waitforlisten 57466 /var/tmp/spdk_tgt.sock 00:09:20.564 09:19:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 57466 ']' 00:09:20.564 09:19:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:20.564 09:19:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.564 09:19:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:20.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:20.564 09:19:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.564 09:19:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:20.564 [2024-12-09 09:19:58.168716] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
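The spdk_tgt_config.json being replayed here was assembled earlier in the run through individual rpc.py calls: two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener on 127.0.0.1:4420. Gathered into one script, that sequence looks like the sketch below; every command and argument is taken from the trace above, only the $RPC shorthand is added for brevity.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Backing bdevs: an 8 MB malloc disk with 512-byte blocks and a 4 MB one with 1024-byte blocks.
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, subsystem, namespaces and listener, as created before the config was saved.
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420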
00:09:20.564 [2024-12-09 09:19:58.168814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57466 ] 00:09:20.822 [2024-12-09 09:19:58.539301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.081 [2024-12-09 09:19:58.584150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.081 [2024-12-09 09:19:58.719360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.340 [2024-12-09 09:19:58.931559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.340 [2024-12-09 09:19:58.963608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:21.598 00:09:21.598 INFO: Checking if target configuration is the same... 00:09:21.598 09:19:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.598 09:19:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:21.598 09:19:59 json_config -- json_config/common.sh@26 -- # echo '' 00:09:21.598 09:19:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:21.598 09:19:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:21.598 09:19:59 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.598 09:19:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:21.598 09:19:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:21.598 + '[' 2 -ne 2 ']' 00:09:21.598 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:21.598 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:21.598 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.598 +++ basename /dev/fd/62 00:09:21.598 ++ mktemp /tmp/62.XXX 00:09:21.598 + tmp_file_1=/tmp/62.jNt 00:09:21.598 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.598 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:21.598 + tmp_file_2=/tmp/spdk_tgt_config.json.r7g 00:09:21.598 + ret=0 00:09:21.598 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:21.856 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:21.856 + diff -u /tmp/62.jNt /tmp/spdk_tgt_config.json.r7g 00:09:21.856 INFO: JSON config files are the same 00:09:21.856 + echo 'INFO: JSON config files are the same' 00:09:21.856 + rm /tmp/62.jNt /tmp/spdk_tgt_config.json.r7g 00:09:21.856 + exit 0 00:09:21.856 INFO: changing configuration and checking if this can be detected... 00:09:21.856 09:19:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:21.856 09:19:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:09:21.856 09:19:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:21.856 09:19:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:22.114 09:19:59 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.114 09:19:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:22.114 09:19:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:22.114 + '[' 2 -ne 2 ']' 00:09:22.114 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:22.114 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:22.114 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:22.114 +++ basename /dev/fd/62 00:09:22.114 ++ mktemp /tmp/62.XXX 00:09:22.114 + tmp_file_1=/tmp/62.dWq 00:09:22.114 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.114 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:22.114 + tmp_file_2=/tmp/spdk_tgt_config.json.Q7z 00:09:22.114 + ret=0 00:09:22.114 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:22.694 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:22.694 + diff -u /tmp/62.dWq /tmp/spdk_tgt_config.json.Q7z 00:09:22.694 + ret=1 00:09:22.694 + echo '=== Start of file: /tmp/62.dWq ===' 00:09:22.694 + cat /tmp/62.dWq 00:09:22.694 + echo '=== End of file: /tmp/62.dWq ===' 00:09:22.694 + echo '' 00:09:22.694 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Q7z ===' 00:09:22.694 + cat /tmp/spdk_tgt_config.json.Q7z 00:09:22.694 + echo '=== End of file: /tmp/spdk_tgt_config.json.Q7z ===' 00:09:22.694 + echo '' 00:09:22.694 + rm /tmp/62.dWq /tmp/spdk_tgt_config.json.Q7z 00:09:22.694 + exit 1 00:09:22.694 INFO: configuration change detected. 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
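The trace above is the heart of the change-detection check: the live configuration is dumped over the target's RPC socket, both JSON documents are normalized with config_filter.py -method sort so key ordering cannot mask or fake a change, and a plain diff decides the verdict. A minimal standalone sketch of that flow, assuming the paths and the target socket (/var/tmp/spdk_tgt.sock) seen in the log rather than the exact json_diff.sh internals:

    # Compare the running target's config against the saved spdk_tgt_config.json.
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=$rootdir/test/json_config/config_filter.py

    live=$(mktemp /tmp/live.XXX)
    saved=$(mktemp /tmp/saved.XXX)

    # Normalize both documents so only real content differences survive the diff.
    $rpc save_config | $filter -method sort > "$live"
    $filter -method sort < "$rootdir/spdk_tgt_config.json" > "$saved"

    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"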
00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 57466 ]] 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.694 09:20:00 json_config -- json_config/json_config.sh@330 -- # killprocess 57466 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 57466 ']' 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@958 -- # kill -0 57466 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@959 -- # uname 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57466 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.694 killing process with pid 57466 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57466' 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@973 -- # kill 57466 00:09:22.694 09:20:00 json_config -- common/autotest_common.sh@978 -- # wait 57466 00:09:22.952 09:20:00 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.952 09:20:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:22.952 09:20:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.952 09:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.952 INFO: Success 00:09:22.952 09:20:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:22.952 09:20:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:22.952 00:09:22.952 real 0m8.084s 00:09:22.952 user 0m11.164s 00:09:22.952 sys 0m1.827s 00:09:22.952 
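The teardown traced above follows the killprocess pattern from autotest_common.sh: confirm the PID is still alive with kill -0, refuse to signal anything whose command name is sudo, then send the default signal and wait so the exit status is reaped before the next test starts. A condensed sketch of that pattern, assuming a plain Linux host as in the log:

    # killprocess-style teardown for a previously started target PID.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1             # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so sockets and hugepages are freed
    }

    killprocess 57466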
************************************ 00:09:22.952 END TEST json_config 00:09:22.952 ************************************ 00:09:22.952 09:20:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.952 09:20:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.952 09:20:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:22.952 09:20:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.952 09:20:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.952 09:20:00 -- common/autotest_common.sh@10 -- # set +x 00:09:23.240 ************************************ 00:09:23.240 START TEST json_config_extra_key 00:09:23.240 ************************************ 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.240 09:20:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.240 --rc genhtml_branch_coverage=1 00:09:23.240 --rc genhtml_function_coverage=1 00:09:23.240 --rc genhtml_legend=1 00:09:23.240 --rc geninfo_all_blocks=1 00:09:23.240 --rc geninfo_unexecuted_blocks=1 00:09:23.240 00:09:23.240 ' 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.240 --rc genhtml_branch_coverage=1 00:09:23.240 --rc genhtml_function_coverage=1 00:09:23.240 --rc genhtml_legend=1 00:09:23.240 --rc geninfo_all_blocks=1 00:09:23.240 --rc geninfo_unexecuted_blocks=1 00:09:23.240 00:09:23.240 ' 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.240 --rc genhtml_branch_coverage=1 00:09:23.240 --rc genhtml_function_coverage=1 00:09:23.240 --rc genhtml_legend=1 00:09:23.240 --rc geninfo_all_blocks=1 00:09:23.240 --rc geninfo_unexecuted_blocks=1 00:09:23.240 00:09:23.240 ' 00:09:23.240 09:20:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.240 --rc genhtml_branch_coverage=1 00:09:23.240 --rc genhtml_function_coverage=1 00:09:23.240 --rc genhtml_legend=1 00:09:23.240 --rc geninfo_all_blocks=1 00:09:23.240 --rc geninfo_unexecuted_blocks=1 00:09:23.240 00:09:23.240 ' 00:09:23.240 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.240 09:20:00 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:23.240 09:20:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.241 09:20:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.241 09:20:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.241 09:20:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.241 09:20:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.241 09:20:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.241 09:20:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.241 09:20:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.241 09:20:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:23.241 09:20:00 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.241 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.241 09:20:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:23.241 INFO: launching applications... 
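What follows is the usual launch handshake: spdk_tgt is started in the background with the extra-key JSON config, its PID is recorded in the app_pid map, and the script blocks until the RPC socket answers. A minimal stand-in for that wait (waitforlisten itself lives in autotest_common.sh and its body is not shown in this excerpt), assuming the binary path and socket from the log; spdk_get_version is used as the probe RPC since it appears in this target's method list:

    # Start the target with a JSON config and wait until its RPC socket responds.
    spdk_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_bin" -m 0x1 -s 1024 -r "$sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!

    for ((i = 0; i < 100; i++)); do
        # A cheap RPC succeeding means the target is up and listening.
        if "$rpc" -s "$sock" -t 1 spdk_get_version > /dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
    echo "target running as pid $tgt_pid"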
00:09:23.241 09:20:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57620 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:23.241 Waiting for target to run... 00:09:23.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57620 /var/tmp/spdk_tgt.sock 00:09:23.241 09:20:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:23.241 09:20:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57620 ']' 00:09:23.241 09:20:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:23.241 09:20:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.241 09:20:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:23.241 09:20:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.241 09:20:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:23.502 [2024-12-09 09:20:00.976287] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:23.502 [2024-12-09 09:20:00.976377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57620 ] 00:09:23.761 [2024-12-09 09:20:01.349930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.761 [2024-12-09 09:20:01.396715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.761 [2024-12-09 09:20:01.427230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.328 00:09:24.328 INFO: shutting down applications... 00:09:24.328 09:20:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.328 09:20:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:24.328 09:20:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
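The shutdown that follows uses the same polling loop already traced above for pid 57276: send SIGINT to the target, then probe it with kill -0 every half second for up to 30 tries, breaking out as soon as the probe fails. A condensed sketch of that loop, using the pid from this run:

    # Ask the target to exit and poll until it is actually gone.
    pid=57620
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done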
00:09:24.328 09:20:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57620 ]] 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57620 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57620 00:09:24.328 09:20:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57620 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:24.894 09:20:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:24.894 SPDK target shutdown done 00:09:24.894 09:20:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:24.894 Success 00:09:24.894 00:09:24.894 real 0m1.685s 00:09:24.894 user 0m1.396s 00:09:24.894 sys 0m0.427s 00:09:24.894 ************************************ 00:09:24.894 END TEST json_config_extra_key 00:09:24.894 ************************************ 00:09:24.894 09:20:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.894 09:20:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:24.894 09:20:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:24.894 09:20:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.894 09:20:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.894 09:20:02 -- common/autotest_common.sh@10 -- # set +x 00:09:24.894 ************************************ 00:09:24.894 START TEST alias_rpc 00:09:24.894 ************************************ 00:09:24.894 09:20:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:24.894 * Looking for test storage... 
00:09:24.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:24.894 09:20:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.894 09:20:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.894 09:20:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.153 09:20:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.153 09:20:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:25.153 09:20:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.153 09:20:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.153 --rc genhtml_branch_coverage=1 00:09:25.153 --rc genhtml_function_coverage=1 00:09:25.153 --rc genhtml_legend=1 00:09:25.153 --rc geninfo_all_blocks=1 00:09:25.153 --rc geninfo_unexecuted_blocks=1 00:09:25.153 00:09:25.153 ' 00:09:25.153 09:20:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.153 --rc genhtml_branch_coverage=1 00:09:25.153 --rc genhtml_function_coverage=1 00:09:25.153 --rc genhtml_legend=1 00:09:25.153 --rc geninfo_all_blocks=1 00:09:25.153 --rc geninfo_unexecuted_blocks=1 00:09:25.153 00:09:25.153 ' 00:09:25.153 09:20:02 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.154 --rc genhtml_branch_coverage=1 00:09:25.154 --rc genhtml_function_coverage=1 00:09:25.154 --rc genhtml_legend=1 00:09:25.154 --rc geninfo_all_blocks=1 00:09:25.154 --rc geninfo_unexecuted_blocks=1 00:09:25.154 00:09:25.154 ' 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.154 --rc genhtml_branch_coverage=1 00:09:25.154 --rc genhtml_function_coverage=1 00:09:25.154 --rc genhtml_legend=1 00:09:25.154 --rc geninfo_all_blocks=1 00:09:25.154 --rc geninfo_unexecuted_blocks=1 00:09:25.154 00:09:25.154 ' 00:09:25.154 09:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:25.154 09:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57698 00:09:25.154 09:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:25.154 09:20:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57698 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57698 ']' 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.154 09:20:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.154 [2024-12-09 09:20:02.721005] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:09:25.154 [2024-12-09 09:20:02.721275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57698 ] 00:09:25.154 [2024-12-09 09:20:02.870676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.413 [2024-12-09 09:20:02.927145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.413 [2024-12-09 09:20:02.987953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.982 09:20:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.982 09:20:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:25.982 09:20:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:26.242 09:20:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57698 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57698 ']' 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57698 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57698 00:09:26.242 killing process with pid 57698 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57698' 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 57698 00:09:26.242 09:20:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 57698 00:09:26.502 ************************************ 00:09:26.502 END TEST alias_rpc 00:09:26.502 ************************************ 00:09:26.502 00:09:26.502 real 0m1.760s 00:09:26.502 user 0m1.867s 00:09:26.502 sys 0m0.460s 00:09:26.502 09:20:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.502 09:20:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.762 09:20:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:26.762 09:20:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:26.762 09:20:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.762 09:20:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.762 09:20:04 -- common/autotest_common.sh@10 -- # set +x 00:09:26.762 ************************************ 00:09:26.762 START TEST spdkcli_tcp 00:09:26.762 ************************************ 00:09:26.762 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:26.762 * Looking for test storage... 
00:09:26.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:26.762 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.762 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.762 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.762 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.763 09:20:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.023 09:20:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.023 --rc genhtml_branch_coverage=1 00:09:27.023 --rc genhtml_function_coverage=1 00:09:27.023 --rc genhtml_legend=1 00:09:27.023 --rc geninfo_all_blocks=1 00:09:27.023 --rc geninfo_unexecuted_blocks=1 00:09:27.023 00:09:27.023 ' 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.023 --rc genhtml_branch_coverage=1 00:09:27.023 --rc genhtml_function_coverage=1 00:09:27.023 --rc genhtml_legend=1 00:09:27.023 --rc geninfo_all_blocks=1 00:09:27.023 --rc geninfo_unexecuted_blocks=1 00:09:27.023 
00:09:27.023 ' 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.023 --rc genhtml_branch_coverage=1 00:09:27.023 --rc genhtml_function_coverage=1 00:09:27.023 --rc genhtml_legend=1 00:09:27.023 --rc geninfo_all_blocks=1 00:09:27.023 --rc geninfo_unexecuted_blocks=1 00:09:27.023 00:09:27.023 ' 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.023 --rc genhtml_branch_coverage=1 00:09:27.023 --rc genhtml_function_coverage=1 00:09:27.023 --rc genhtml_legend=1 00:09:27.023 --rc geninfo_all_blocks=1 00:09:27.023 --rc geninfo_unexecuted_blocks=1 00:09:27.023 00:09:27.023 ' 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57776 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:27.023 09:20:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57776 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57776 ']' 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.023 09:20:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 [2024-12-09 09:20:04.566174] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:09:27.023 [2024-12-09 09:20:04.566249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57776 ] 00:09:27.023 [2024-12-09 09:20:04.717320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.282 [2024-12-09 09:20:04.775389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.282 [2024-12-09 09:20:04.775384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.282 [2024-12-09 09:20:04.835854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.847 09:20:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.847 09:20:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:27.847 09:20:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57788 00:09:27.847 09:20:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:27.847 09:20:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:28.105 [ 00:09:28.105 "bdev_malloc_delete", 00:09:28.105 "bdev_malloc_create", 00:09:28.105 "bdev_null_resize", 00:09:28.105 "bdev_null_delete", 00:09:28.105 "bdev_null_create", 00:09:28.105 "bdev_nvme_cuse_unregister", 00:09:28.105 "bdev_nvme_cuse_register", 00:09:28.105 "bdev_opal_new_user", 00:09:28.105 "bdev_opal_set_lock_state", 00:09:28.105 "bdev_opal_delete", 00:09:28.105 "bdev_opal_get_info", 00:09:28.105 "bdev_opal_create", 00:09:28.105 "bdev_nvme_opal_revert", 00:09:28.105 "bdev_nvme_opal_init", 00:09:28.105 "bdev_nvme_send_cmd", 00:09:28.105 "bdev_nvme_set_keys", 00:09:28.105 "bdev_nvme_get_path_iostat", 00:09:28.105 "bdev_nvme_get_mdns_discovery_info", 00:09:28.105 "bdev_nvme_stop_mdns_discovery", 00:09:28.105 "bdev_nvme_start_mdns_discovery", 00:09:28.105 "bdev_nvme_set_multipath_policy", 00:09:28.105 "bdev_nvme_set_preferred_path", 00:09:28.105 "bdev_nvme_get_io_paths", 00:09:28.105 "bdev_nvme_remove_error_injection", 00:09:28.105 "bdev_nvme_add_error_injection", 00:09:28.105 "bdev_nvme_get_discovery_info", 00:09:28.105 "bdev_nvme_stop_discovery", 00:09:28.105 "bdev_nvme_start_discovery", 00:09:28.105 "bdev_nvme_get_controller_health_info", 00:09:28.105 "bdev_nvme_disable_controller", 00:09:28.105 "bdev_nvme_enable_controller", 00:09:28.105 "bdev_nvme_reset_controller", 00:09:28.105 "bdev_nvme_get_transport_statistics", 00:09:28.105 "bdev_nvme_apply_firmware", 00:09:28.105 "bdev_nvme_detach_controller", 00:09:28.105 "bdev_nvme_get_controllers", 00:09:28.105 "bdev_nvme_attach_controller", 00:09:28.105 "bdev_nvme_set_hotplug", 00:09:28.105 "bdev_nvme_set_options", 00:09:28.105 "bdev_passthru_delete", 00:09:28.105 "bdev_passthru_create", 00:09:28.105 "bdev_lvol_set_parent_bdev", 00:09:28.105 "bdev_lvol_set_parent", 00:09:28.105 "bdev_lvol_check_shallow_copy", 00:09:28.105 "bdev_lvol_start_shallow_copy", 00:09:28.105 "bdev_lvol_grow_lvstore", 00:09:28.105 "bdev_lvol_get_lvols", 00:09:28.105 "bdev_lvol_get_lvstores", 00:09:28.105 "bdev_lvol_delete", 00:09:28.105 "bdev_lvol_set_read_only", 00:09:28.105 "bdev_lvol_resize", 00:09:28.105 "bdev_lvol_decouple_parent", 00:09:28.105 "bdev_lvol_inflate", 00:09:28.105 "bdev_lvol_rename", 00:09:28.105 "bdev_lvol_clone_bdev", 00:09:28.105 "bdev_lvol_clone", 00:09:28.105 "bdev_lvol_snapshot", 
00:09:28.105 "bdev_lvol_create", 00:09:28.105 "bdev_lvol_delete_lvstore", 00:09:28.105 "bdev_lvol_rename_lvstore", 00:09:28.105 "bdev_lvol_create_lvstore", 00:09:28.105 "bdev_raid_set_options", 00:09:28.105 "bdev_raid_remove_base_bdev", 00:09:28.105 "bdev_raid_add_base_bdev", 00:09:28.105 "bdev_raid_delete", 00:09:28.105 "bdev_raid_create", 00:09:28.105 "bdev_raid_get_bdevs", 00:09:28.105 "bdev_error_inject_error", 00:09:28.105 "bdev_error_delete", 00:09:28.105 "bdev_error_create", 00:09:28.105 "bdev_split_delete", 00:09:28.105 "bdev_split_create", 00:09:28.105 "bdev_delay_delete", 00:09:28.105 "bdev_delay_create", 00:09:28.105 "bdev_delay_update_latency", 00:09:28.105 "bdev_zone_block_delete", 00:09:28.105 "bdev_zone_block_create", 00:09:28.105 "blobfs_create", 00:09:28.105 "blobfs_detect", 00:09:28.105 "blobfs_set_cache_size", 00:09:28.105 "bdev_aio_delete", 00:09:28.105 "bdev_aio_rescan", 00:09:28.105 "bdev_aio_create", 00:09:28.105 "bdev_ftl_set_property", 00:09:28.105 "bdev_ftl_get_properties", 00:09:28.105 "bdev_ftl_get_stats", 00:09:28.105 "bdev_ftl_unmap", 00:09:28.105 "bdev_ftl_unload", 00:09:28.105 "bdev_ftl_delete", 00:09:28.105 "bdev_ftl_load", 00:09:28.105 "bdev_ftl_create", 00:09:28.105 "bdev_virtio_attach_controller", 00:09:28.105 "bdev_virtio_scsi_get_devices", 00:09:28.105 "bdev_virtio_detach_controller", 00:09:28.105 "bdev_virtio_blk_set_hotplug", 00:09:28.105 "bdev_iscsi_delete", 00:09:28.105 "bdev_iscsi_create", 00:09:28.105 "bdev_iscsi_set_options", 00:09:28.105 "bdev_uring_delete", 00:09:28.105 "bdev_uring_rescan", 00:09:28.105 "bdev_uring_create", 00:09:28.105 "accel_error_inject_error", 00:09:28.105 "ioat_scan_accel_module", 00:09:28.105 "dsa_scan_accel_module", 00:09:28.105 "iaa_scan_accel_module", 00:09:28.105 "keyring_file_remove_key", 00:09:28.105 "keyring_file_add_key", 00:09:28.105 "keyring_linux_set_options", 00:09:28.105 "fsdev_aio_delete", 00:09:28.105 "fsdev_aio_create", 00:09:28.105 "iscsi_get_histogram", 00:09:28.105 "iscsi_enable_histogram", 00:09:28.105 "iscsi_set_options", 00:09:28.105 "iscsi_get_auth_groups", 00:09:28.105 "iscsi_auth_group_remove_secret", 00:09:28.105 "iscsi_auth_group_add_secret", 00:09:28.105 "iscsi_delete_auth_group", 00:09:28.105 "iscsi_create_auth_group", 00:09:28.105 "iscsi_set_discovery_auth", 00:09:28.105 "iscsi_get_options", 00:09:28.105 "iscsi_target_node_request_logout", 00:09:28.105 "iscsi_target_node_set_redirect", 00:09:28.105 "iscsi_target_node_set_auth", 00:09:28.105 "iscsi_target_node_add_lun", 00:09:28.105 "iscsi_get_stats", 00:09:28.105 "iscsi_get_connections", 00:09:28.105 "iscsi_portal_group_set_auth", 00:09:28.105 "iscsi_start_portal_group", 00:09:28.105 "iscsi_delete_portal_group", 00:09:28.105 "iscsi_create_portal_group", 00:09:28.105 "iscsi_get_portal_groups", 00:09:28.105 "iscsi_delete_target_node", 00:09:28.105 "iscsi_target_node_remove_pg_ig_maps", 00:09:28.105 "iscsi_target_node_add_pg_ig_maps", 00:09:28.105 "iscsi_create_target_node", 00:09:28.105 "iscsi_get_target_nodes", 00:09:28.105 "iscsi_delete_initiator_group", 00:09:28.105 "iscsi_initiator_group_remove_initiators", 00:09:28.105 "iscsi_initiator_group_add_initiators", 00:09:28.105 "iscsi_create_initiator_group", 00:09:28.105 "iscsi_get_initiator_groups", 00:09:28.105 "nvmf_set_crdt", 00:09:28.105 "nvmf_set_config", 00:09:28.105 "nvmf_set_max_subsystems", 00:09:28.105 "nvmf_stop_mdns_prr", 00:09:28.105 "nvmf_publish_mdns_prr", 00:09:28.105 "nvmf_subsystem_get_listeners", 00:09:28.105 "nvmf_subsystem_get_qpairs", 00:09:28.105 
"nvmf_subsystem_get_controllers", 00:09:28.105 "nvmf_get_stats", 00:09:28.105 "nvmf_get_transports", 00:09:28.105 "nvmf_create_transport", 00:09:28.105 "nvmf_get_targets", 00:09:28.105 "nvmf_delete_target", 00:09:28.105 "nvmf_create_target", 00:09:28.105 "nvmf_subsystem_allow_any_host", 00:09:28.105 "nvmf_subsystem_set_keys", 00:09:28.105 "nvmf_subsystem_remove_host", 00:09:28.105 "nvmf_subsystem_add_host", 00:09:28.105 "nvmf_ns_remove_host", 00:09:28.105 "nvmf_ns_add_host", 00:09:28.105 "nvmf_subsystem_remove_ns", 00:09:28.105 "nvmf_subsystem_set_ns_ana_group", 00:09:28.105 "nvmf_subsystem_add_ns", 00:09:28.105 "nvmf_subsystem_listener_set_ana_state", 00:09:28.105 "nvmf_discovery_get_referrals", 00:09:28.105 "nvmf_discovery_remove_referral", 00:09:28.105 "nvmf_discovery_add_referral", 00:09:28.105 "nvmf_subsystem_remove_listener", 00:09:28.105 "nvmf_subsystem_add_listener", 00:09:28.105 "nvmf_delete_subsystem", 00:09:28.105 "nvmf_create_subsystem", 00:09:28.105 "nvmf_get_subsystems", 00:09:28.105 "env_dpdk_get_mem_stats", 00:09:28.105 "nbd_get_disks", 00:09:28.105 "nbd_stop_disk", 00:09:28.105 "nbd_start_disk", 00:09:28.105 "ublk_recover_disk", 00:09:28.105 "ublk_get_disks", 00:09:28.105 "ublk_stop_disk", 00:09:28.105 "ublk_start_disk", 00:09:28.105 "ublk_destroy_target", 00:09:28.105 "ublk_create_target", 00:09:28.105 "virtio_blk_create_transport", 00:09:28.105 "virtio_blk_get_transports", 00:09:28.105 "vhost_controller_set_coalescing", 00:09:28.105 "vhost_get_controllers", 00:09:28.105 "vhost_delete_controller", 00:09:28.105 "vhost_create_blk_controller", 00:09:28.105 "vhost_scsi_controller_remove_target", 00:09:28.105 "vhost_scsi_controller_add_target", 00:09:28.105 "vhost_start_scsi_controller", 00:09:28.105 "vhost_create_scsi_controller", 00:09:28.105 "thread_set_cpumask", 00:09:28.105 "scheduler_set_options", 00:09:28.105 "framework_get_governor", 00:09:28.105 "framework_get_scheduler", 00:09:28.105 "framework_set_scheduler", 00:09:28.105 "framework_get_reactors", 00:09:28.105 "thread_get_io_channels", 00:09:28.105 "thread_get_pollers", 00:09:28.105 "thread_get_stats", 00:09:28.105 "framework_monitor_context_switch", 00:09:28.105 "spdk_kill_instance", 00:09:28.105 "log_enable_timestamps", 00:09:28.105 "log_get_flags", 00:09:28.105 "log_clear_flag", 00:09:28.105 "log_set_flag", 00:09:28.105 "log_get_level", 00:09:28.105 "log_set_level", 00:09:28.105 "log_get_print_level", 00:09:28.105 "log_set_print_level", 00:09:28.105 "framework_enable_cpumask_locks", 00:09:28.105 "framework_disable_cpumask_locks", 00:09:28.105 "framework_wait_init", 00:09:28.105 "framework_start_init", 00:09:28.105 "scsi_get_devices", 00:09:28.105 "bdev_get_histogram", 00:09:28.105 "bdev_enable_histogram", 00:09:28.105 "bdev_set_qos_limit", 00:09:28.105 "bdev_set_qd_sampling_period", 00:09:28.105 "bdev_get_bdevs", 00:09:28.105 "bdev_reset_iostat", 00:09:28.105 "bdev_get_iostat", 00:09:28.105 "bdev_examine", 00:09:28.105 "bdev_wait_for_examine", 00:09:28.105 "bdev_set_options", 00:09:28.105 "accel_get_stats", 00:09:28.105 "accel_set_options", 00:09:28.106 "accel_set_driver", 00:09:28.106 "accel_crypto_key_destroy", 00:09:28.106 "accel_crypto_keys_get", 00:09:28.106 "accel_crypto_key_create", 00:09:28.106 "accel_assign_opc", 00:09:28.106 "accel_get_module_info", 00:09:28.106 "accel_get_opc_assignments", 00:09:28.106 "vmd_rescan", 00:09:28.106 "vmd_remove_device", 00:09:28.106 "vmd_enable", 00:09:28.106 "sock_get_default_impl", 00:09:28.106 "sock_set_default_impl", 00:09:28.106 "sock_impl_set_options", 00:09:28.106 
"sock_impl_get_options", 00:09:28.106 "iobuf_get_stats", 00:09:28.106 "iobuf_set_options", 00:09:28.106 "keyring_get_keys", 00:09:28.106 "framework_get_pci_devices", 00:09:28.106 "framework_get_config", 00:09:28.106 "framework_get_subsystems", 00:09:28.106 "fsdev_set_opts", 00:09:28.106 "fsdev_get_opts", 00:09:28.106 "trace_get_info", 00:09:28.106 "trace_get_tpoint_group_mask", 00:09:28.106 "trace_disable_tpoint_group", 00:09:28.106 "trace_enable_tpoint_group", 00:09:28.106 "trace_clear_tpoint_mask", 00:09:28.106 "trace_set_tpoint_mask", 00:09:28.106 "notify_get_notifications", 00:09:28.106 "notify_get_types", 00:09:28.106 "spdk_get_version", 00:09:28.106 "rpc_get_methods" 00:09:28.106 ] 00:09:28.106 09:20:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.106 09:20:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.106 09:20:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57776 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57776 ']' 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57776 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57776 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.106 killing process with pid 57776 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57776' 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57776 00:09:28.106 09:20:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57776 00:09:28.690 00:09:28.690 real 0m1.833s 00:09:28.690 user 0m3.187s 00:09:28.690 sys 0m0.552s 00:09:28.690 09:20:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.690 ************************************ 00:09:28.690 END TEST spdkcli_tcp 00:09:28.690 09:20:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.690 ************************************ 00:09:28.690 09:20:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:28.690 09:20:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.690 09:20:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.690 09:20:06 -- common/autotest_common.sh@10 -- # set +x 00:09:28.690 ************************************ 00:09:28.690 START TEST dpdk_mem_utility 00:09:28.690 ************************************ 00:09:28.690 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:28.690 * Looking for test storage... 
00:09:28.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:28.690 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.690 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.690 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.690 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.690 09:20:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.690 09:20:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.690 09:20:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.690 09:20:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.690 09:20:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.691 09:20:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.691 --rc genhtml_branch_coverage=1 00:09:28.691 --rc genhtml_function_coverage=1 00:09:28.691 --rc genhtml_legend=1 00:09:28.691 --rc geninfo_all_blocks=1 00:09:28.691 --rc geninfo_unexecuted_blocks=1 00:09:28.691 00:09:28.691 ' 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.691 --rc 
genhtml_branch_coverage=1 00:09:28.691 --rc genhtml_function_coverage=1 00:09:28.691 --rc genhtml_legend=1 00:09:28.691 --rc geninfo_all_blocks=1 00:09:28.691 --rc geninfo_unexecuted_blocks=1 00:09:28.691 00:09:28.691 ' 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.691 --rc genhtml_branch_coverage=1 00:09:28.691 --rc genhtml_function_coverage=1 00:09:28.691 --rc genhtml_legend=1 00:09:28.691 --rc geninfo_all_blocks=1 00:09:28.691 --rc geninfo_unexecuted_blocks=1 00:09:28.691 00:09:28.691 ' 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.691 --rc genhtml_branch_coverage=1 00:09:28.691 --rc genhtml_function_coverage=1 00:09:28.691 --rc genhtml_legend=1 00:09:28.691 --rc geninfo_all_blocks=1 00:09:28.691 --rc geninfo_unexecuted_blocks=1 00:09:28.691 00:09:28.691 ' 00:09:28.691 09:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:28.691 09:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57870 00:09:28.691 09:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.691 09:20:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57870 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57870 ']' 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.691 09:20:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:28.985 [2024-12-09 09:20:06.452001] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:09:28.985 [2024-12-09 09:20:06.452080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57870 ] 00:09:28.985 [2024-12-09 09:20:06.608670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.985 [2024-12-09 09:20:06.667135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.271 [2024-12-09 09:20:06.730391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.840 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.840 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:29.840 09:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:29.840 09:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:29.840 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.840 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:29.840 { 00:09:29.840 "filename": "/tmp/spdk_mem_dump.txt" 00:09:29.840 } 00:09:29.840 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.840 09:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:29.840 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:29.840 1 heaps totaling size 818.000000 MiB 00:09:29.840 size: 818.000000 MiB heap id: 0 00:09:29.840 end heaps---------- 00:09:29.840 9 mempools totaling size 603.782043 MiB 00:09:29.840 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:29.840 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:29.840 size: 100.555481 MiB name: bdev_io_57870 00:09:29.840 size: 50.003479 MiB name: msgpool_57870 00:09:29.840 size: 36.509338 MiB name: fsdev_io_57870 00:09:29.840 size: 21.763794 MiB name: PDU_Pool 00:09:29.840 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:29.840 size: 4.133484 MiB name: evtpool_57870 00:09:29.840 size: 0.026123 MiB name: Session_Pool 00:09:29.840 end mempools------- 00:09:29.840 6 memzones totaling size 4.142822 MiB 00:09:29.840 size: 1.000366 MiB name: RG_ring_0_57870 00:09:29.840 size: 1.000366 MiB name: RG_ring_1_57870 00:09:29.840 size: 1.000366 MiB name: RG_ring_4_57870 00:09:29.840 size: 1.000366 MiB name: RG_ring_5_57870 00:09:29.840 size: 0.125366 MiB name: RG_ring_2_57870 00:09:29.840 size: 0.015991 MiB name: RG_ring_3_57870 00:09:29.840 end memzones------- 00:09:29.840 09:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:29.840 heap id: 0 total size: 818.000000 MiB number of busy elements: 319 number of free elements: 15 00:09:29.840 list of free elements. 
size: 10.802124 MiB 00:09:29.840 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:29.840 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:29.840 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:29.840 element at address: 0x200000400000 with size: 0.993958 MiB 00:09:29.840 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:29.840 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:29.840 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:29.840 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:29.840 element at address: 0x20001ae00000 with size: 0.566406 MiB 00:09:29.840 element at address: 0x20000a600000 with size: 0.488892 MiB 00:09:29.840 element at address: 0x200000c00000 with size: 0.486267 MiB 00:09:29.840 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:29.840 element at address: 0x200003e00000 with size: 0.480286 MiB 00:09:29.840 element at address: 0x200028200000 with size: 0.396667 MiB 00:09:29.840 element at address: 0x200000800000 with size: 0.351746 MiB 00:09:29.840 list of standard malloc elements. size: 199.268982 MiB 00:09:29.840 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:29.840 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:29.840 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:29.840 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:29.840 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:29.840 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:29.840 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:29.840 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:29.840 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:29.840 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:09:29.840 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:09:29.841 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000085e580 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087e840 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087e900 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f080 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f140 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f200 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f380 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f440 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f500 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:29.841 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:09:29.841 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:09:29.841 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:09:29.842 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:29.842 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:29.842 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91000 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae910c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91180 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92740 with size: 0.000183 MiB 
00:09:29.842 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:09:29.842 element at 
address: 0x20001ae94cc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:09:29.842 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:29.843 element at address: 0x2000282658c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x200028265980 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826c580 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826c780 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826c840 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826c900 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d080 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d140 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d200 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d380 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d440 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d500 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d680 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d740 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d800 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826d980 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826da40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826db00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826de00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826df80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e040 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e100 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e1c0 
with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e280 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e340 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e400 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e580 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e640 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e700 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e880 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826e940 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f000 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f180 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f240 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f300 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f480 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f540 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f600 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f780 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f840 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f900 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:29.843 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:29.843 list of memzone associated elements. 
size: 607.928894 MiB 00:09:29.843 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:29.843 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:29.843 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:29.843 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:29.843 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:29.843 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57870_0 00:09:29.843 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:29.843 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57870_0 00:09:29.843 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:29.843 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57870_0 00:09:29.843 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:29.843 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:29.843 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:29.843 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:29.843 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:29.843 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57870_0 00:09:29.843 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:29.843 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57870 00:09:29.843 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:29.843 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57870 00:09:29.843 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:29.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:29.843 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:29.844 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:29.844 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:29.844 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:29.844 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:29.844 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:29.844 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:29.844 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57870 00:09:29.844 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:29.844 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57870 00:09:29.844 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:29.844 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57870 00:09:29.844 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:29.844 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57870 00:09:29.844 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:29.844 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57870 00:09:29.844 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:29.844 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57870 00:09:29.844 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:29.844 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:29.844 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:29.844 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:29.844 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:29.844 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:29.844 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:29.844 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57870 00:09:29.844 element at address: 0x20000085e640 with size: 0.125488 MiB 00:09:29.844 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57870 00:09:29.844 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:29.844 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:29.844 element at address: 0x200028265a40 with size: 0.023743 MiB 00:09:29.844 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:29.844 element at address: 0x20000085a380 with size: 0.016113 MiB 00:09:29.844 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57870 00:09:29.844 element at address: 0x20002826bb80 with size: 0.002441 MiB 00:09:29.844 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:29.844 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:09:29.844 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57870 00:09:29.844 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:29.844 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57870 00:09:29.844 element at address: 0x20000085a180 with size: 0.000305 MiB 00:09:29.844 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57870 00:09:29.844 element at address: 0x20002826c640 with size: 0.000305 MiB 00:09:29.844 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:29.844 09:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:29.844 09:20:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57870 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57870 ']' 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57870 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57870 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.844 killing process with pid 57870 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57870' 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57870 00:09:29.844 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57870 00:09:30.410 00:09:30.410 real 0m1.675s 00:09:30.410 user 0m1.676s 00:09:30.410 sys 0m0.490s 00:09:30.410 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.410 09:20:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:30.410 ************************************ 00:09:30.410 END TEST dpdk_mem_utility 00:09:30.410 ************************************ 00:09:30.410 09:20:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:30.410 09:20:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.410 09:20:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.410 09:20:07 -- common/autotest_common.sh@10 -- # set +x 
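Before the event suite that starts below (and again before the scheduler suite further down), the harness repeats the same lcov version gate that opened this section: it takes the last field of lcov --version and checks whether it is older than 2 to decide which coverage flags to export. A minimal re-implementation of that comparison is sketched here purely to illustrate the semantics visible in the trace; the function and variable names are illustrative, not the harness's own, and lcov is assumed to be in PATH.

  # Returns 0 (true) when version $1 sorts before $2, comparing dot/dash-separated
  # numeric fields; missing fields count as 0.
  version_lt() {
      local IFS='.-'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.14" or "2.0-1"
  if version_lt "$ver" 2; then
      echo "lcov 1.x: enable the extra branch/function coverage options"
  fi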
00:09:30.410 ************************************ 00:09:30.410 START TEST event 00:09:30.410 ************************************ 00:09:30.410 09:20:07 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:30.410 * Looking for test storage... 00:09:30.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:30.410 09:20:08 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.410 09:20:08 event -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.410 09:20:08 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.410 09:20:08 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.410 09:20:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.410 09:20:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.410 09:20:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.410 09:20:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.410 09:20:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.410 09:20:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.410 09:20:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.410 09:20:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.410 09:20:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.410 09:20:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.410 09:20:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.410 09:20:08 event -- scripts/common.sh@344 -- # case "$op" in 00:09:30.410 09:20:08 event -- scripts/common.sh@345 -- # : 1 00:09:30.410 09:20:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.410 09:20:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.410 09:20:08 event -- scripts/common.sh@365 -- # decimal 1 00:09:30.410 09:20:08 event -- scripts/common.sh@353 -- # local d=1 00:09:30.410 09:20:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.410 09:20:08 event -- scripts/common.sh@355 -- # echo 1 00:09:30.410 09:20:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.410 09:20:08 event -- scripts/common.sh@366 -- # decimal 2 00:09:30.410 09:20:08 event -- scripts/common.sh@353 -- # local d=2 00:09:30.410 09:20:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.410 09:20:08 event -- scripts/common.sh@355 -- # echo 2 00:09:30.410 09:20:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.410 09:20:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.410 09:20:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.410 09:20:08 event -- scripts/common.sh@368 -- # return 0 00:09:30.410 09:20:08 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.668 09:20:08 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.668 --rc genhtml_branch_coverage=1 00:09:30.668 --rc genhtml_function_coverage=1 00:09:30.668 --rc genhtml_legend=1 00:09:30.668 --rc geninfo_all_blocks=1 00:09:30.668 --rc geninfo_unexecuted_blocks=1 00:09:30.668 00:09:30.668 ' 00:09:30.668 09:20:08 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.668 --rc genhtml_branch_coverage=1 00:09:30.668 --rc genhtml_function_coverage=1 00:09:30.668 --rc genhtml_legend=1 00:09:30.668 --rc 
geninfo_all_blocks=1 00:09:30.668 --rc geninfo_unexecuted_blocks=1 00:09:30.668 00:09:30.668 ' 00:09:30.668 09:20:08 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.668 --rc genhtml_branch_coverage=1 00:09:30.668 --rc genhtml_function_coverage=1 00:09:30.668 --rc genhtml_legend=1 00:09:30.668 --rc geninfo_all_blocks=1 00:09:30.668 --rc geninfo_unexecuted_blocks=1 00:09:30.668 00:09:30.668 ' 00:09:30.668 09:20:08 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.668 --rc genhtml_branch_coverage=1 00:09:30.668 --rc genhtml_function_coverage=1 00:09:30.668 --rc genhtml_legend=1 00:09:30.668 --rc geninfo_all_blocks=1 00:09:30.668 --rc geninfo_unexecuted_blocks=1 00:09:30.668 00:09:30.668 ' 00:09:30.668 09:20:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:30.668 09:20:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:30.668 09:20:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.668 09:20:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:30.668 09:20:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.668 09:20:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:30.668 ************************************ 00:09:30.668 START TEST event_perf 00:09:30.668 ************************************ 00:09:30.668 09:20:08 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.668 Running I/O for 1 seconds...[2024-12-09 09:20:08.172324] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:30.668 [2024-12-09 09:20:08.172414] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:09:30.668 [2024-12-09 09:20:08.323863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.668 [2024-12-09 09:20:08.383648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.668 [2024-12-09 09:20:08.383778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.668 [2024-12-09 09:20:08.383889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.668 [2024-12-09 09:20:08.383893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.047 Running I/O for 1 seconds... 00:09:32.047 lcore 0: 188489 00:09:32.047 lcore 1: 188487 00:09:32.047 lcore 2: 188488 00:09:32.047 lcore 3: 188487 00:09:32.047 done. 
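The event_perf run that just finished was launched as a standalone SPDK app with the flags shown in the trace: four reactors (-m 0xF) driving the event framework for one second (-t 1), after which each lcore reports how many events it processed (the "lcore N:" lines above). Re-running it by hand from this workspace is the same command; the description of what the binary measures is inferred from its output here, not from its source.

  # four reactors, one-second run; prints per-lcore event counts as above
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1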
00:09:32.047 ************************************ 00:09:32.047 END TEST event_perf 00:09:32.047 ************************************ 00:09:32.047 00:09:32.047 real 0m1.281s 00:09:32.047 user 0m4.106s 00:09:32.047 sys 0m0.051s 00:09:32.047 09:20:09 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.047 09:20:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:32.047 09:20:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:32.047 09:20:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.047 09:20:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.047 09:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:09:32.047 ************************************ 00:09:32.047 START TEST event_reactor 00:09:32.047 ************************************ 00:09:32.047 09:20:09 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:32.047 [2024-12-09 09:20:09.519052] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:32.047 [2024-12-09 09:20:09.519178] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57988 ] 00:09:32.047 [2024-12-09 09:20:09.671546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.047 [2024-12-09 09:20:09.728400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.424 test_start 00:09:33.424 oneshot 00:09:33.424 tick 100 00:09:33.424 tick 100 00:09:33.424 tick 250 00:09:33.424 tick 100 00:09:33.424 tick 100 00:09:33.424 tick 100 00:09:33.424 tick 250 00:09:33.424 tick 500 00:09:33.424 tick 100 00:09:33.424 tick 100 00:09:33.424 tick 250 00:09:33.424 tick 100 00:09:33.424 tick 100 00:09:33.424 test_end 00:09:33.424 ************************************ 00:09:33.424 END TEST event_reactor 00:09:33.424 ************************************ 00:09:33.424 00:09:33.424 real 0m1.278s 00:09:33.424 user 0m1.120s 00:09:33.424 sys 0m0.052s 00:09:33.424 09:20:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.424 09:20:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:33.424 09:20:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:33.424 09:20:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.424 09:20:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.424 09:20:10 event -- common/autotest_common.sh@10 -- # set +x 00:09:33.424 ************************************ 00:09:33.424 START TEST event_reactor_perf 00:09:33.424 ************************************ 00:09:33.424 09:20:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:33.424 [2024-12-09 09:20:10.875122] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
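The two reactor tests traced here are invoked the same way. Judging from their output, reactor -t 1 is a functional check that schedules oneshot and periodic tick events on a single reactor for one second, while reactor_perf -t 1 (whose startup continues below) measures raw event throughput and prints an events-per-second figure. The commands are copied from the run_test lines above; the per-test descriptions are inferred from the logged output.

  # single reactor, one second: emits the test_start/oneshot/tick trace seen above
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  # single reactor, one second: reports "Performance: N events per second"
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1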
00:09:33.424 [2024-12-09 09:20:10.875487] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58029 ] 00:09:33.424 [2024-12-09 09:20:11.030126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.424 [2024-12-09 09:20:11.076798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.804 test_start 00:09:34.804 test_end 00:09:34.804 Performance: 483744 events per second 00:09:34.804 ************************************ 00:09:34.804 END TEST event_reactor_perf 00:09:34.804 ************************************ 00:09:34.804 00:09:34.804 real 0m1.268s 00:09:34.804 user 0m1.110s 00:09:34.804 sys 0m0.050s 00:09:34.804 09:20:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.804 09:20:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:34.804 09:20:12 event -- event/event.sh@49 -- # uname -s 00:09:34.804 09:20:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:34.804 09:20:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:34.804 09:20:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.804 09:20:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.804 09:20:12 event -- common/autotest_common.sh@10 -- # set +x 00:09:34.804 ************************************ 00:09:34.804 START TEST event_scheduler 00:09:34.804 ************************************ 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:34.804 * Looking for test storage... 
00:09:34.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.804 09:20:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.804 --rc genhtml_branch_coverage=1 00:09:34.804 --rc genhtml_function_coverage=1 00:09:34.804 --rc genhtml_legend=1 00:09:34.804 --rc geninfo_all_blocks=1 00:09:34.804 --rc geninfo_unexecuted_blocks=1 00:09:34.804 00:09:34.804 ' 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.804 --rc genhtml_branch_coverage=1 00:09:34.804 --rc genhtml_function_coverage=1 00:09:34.804 --rc genhtml_legend=1 00:09:34.804 --rc geninfo_all_blocks=1 00:09:34.804 --rc geninfo_unexecuted_blocks=1 00:09:34.804 00:09:34.804 ' 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.804 --rc genhtml_branch_coverage=1 00:09:34.804 --rc genhtml_function_coverage=1 00:09:34.804 --rc genhtml_legend=1 00:09:34.804 --rc geninfo_all_blocks=1 00:09:34.804 --rc geninfo_unexecuted_blocks=1 00:09:34.804 00:09:34.804 ' 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.804 --rc genhtml_branch_coverage=1 00:09:34.804 --rc genhtml_function_coverage=1 00:09:34.804 --rc genhtml_legend=1 00:09:34.804 --rc geninfo_all_blocks=1 00:09:34.804 --rc geninfo_unexecuted_blocks=1 00:09:34.804 00:09:34.804 ' 00:09:34.804 09:20:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:34.804 09:20:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58093 00:09:34.804 09:20:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:34.804 09:20:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.804 09:20:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58093 00:09:34.804 09:20:12 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58093 ']' 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.804 09:20:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:34.804 [2024-12-09 09:20:12.485936] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:34.804 [2024-12-09 09:20:12.486136] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:09:35.064 [2024-12-09 09:20:12.624853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.064 [2024-12-09 09:20:12.693620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.064 [2024-12-09 09:20:12.693807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.064 [2024-12-09 09:20:12.693903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.064 [2024-12-09 09:20:12.693908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.653 09:20:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.653 09:20:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:35.653 09:20:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:35.653 09:20:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.653 09:20:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.653 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:35.653 POWER: Cannot set governor of lcore 0 to userspace 00:09:35.653 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:35.653 POWER: Cannot set governor of lcore 0 to performance 00:09:35.653 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:35.653 POWER: Cannot set governor of lcore 0 to userspace 00:09:35.653 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:35.653 POWER: Cannot set governor of lcore 0 to userspace 00:09:35.653 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:35.653 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:35.653 POWER: Unable to set Power Management Environment for lcore 0 00:09:35.653 [2024-12-09 09:20:13.376040] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:35.653 [2024-12-09 09:20:13.376164] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:35.653 [2024-12-09 09:20:13.376248] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:35.653 [2024-12-09 09:20:13.376370] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:35.653 [2024-12-09 09:20:13.376503] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:35.653 [2024-12-09 09:20:13.376627] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 [2024-12-09 09:20:13.427893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.913 [2024-12-09 09:20:13.457831] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 ************************************ 00:09:35.913 START TEST scheduler_create_thread 00:09:35.913 ************************************ 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 2 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 3 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 4 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 5 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 6 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 7 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 8 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.913 9 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.913 09:20:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:36.481 10 00:09:36.481 09:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.481 09:20:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:36.481 09:20:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.481 09:20:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.859 09:20:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.859 09:20:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:37.859 09:20:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:37.859 09:20:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.859 09:20:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 09:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 09:20:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:38.796 09:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 09:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.426 09:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.426 09:20:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:39.426 09:20:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:39.426 09:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.426 09:20:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.993 09:20:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.993 ************************************ 00:09:39.993 END TEST scheduler_create_thread 00:09:39.993 ************************************ 00:09:39.993 00:09:39.993 real 0m4.211s 00:09:39.993 user 0m0.024s 00:09:39.993 sys 0m0.011s 00:09:39.993 09:20:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.993 09:20:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.251 09:20:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:40.251 09:20:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58093 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58093 ']' 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58093 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58093 00:09:40.251 killing process with pid 58093 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58093' 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58093 00:09:40.251 09:20:17 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58093 00:09:40.252 [2024-12-09 09:20:17.962888] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:40.510 00:09:40.510 real 0m5.995s 00:09:40.510 user 0m13.112s 00:09:40.510 sys 0m0.434s 00:09:40.510 09:20:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.510 09:20:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.510 ************************************ 00:09:40.510 END TEST event_scheduler 00:09:40.510 ************************************ 00:09:40.768 09:20:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:40.768 09:20:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:40.768 09:20:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.768 09:20:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.768 09:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 ************************************ 00:09:40.768 START TEST app_repeat 00:09:40.768 ************************************ 00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58209 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:40.768 Process app_repeat pid: 58209 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58209' 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:40.768 spdk_app_start Round 0 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:40.768 09:20:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58209 /var/tmp/spdk-nbd.sock 00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58209 ']' 00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:40.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.768 09:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:40.768 [2024-12-09 09:20:18.308908] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:09:40.768 [2024-12-09 09:20:18.309015] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58209 ] 00:09:40.768 [2024-12-09 09:20:18.468857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.026 [2024-12-09 09:20:18.520169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.026 [2024-12-09 09:20:18.520172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.026 [2024-12-09 09:20:18.562981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.612 09:20:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.612 09:20:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:41.612 09:20:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:41.871 Malloc0 00:09:41.871 09:20:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.131 Malloc1 00:09:42.131 09:20:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.131 09:20:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:42.390 /dev/nbd0 00:09:42.390 09:20:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.390 09:20:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.390 1+0 records in 00:09:42.390 1+0 records out 00:09:42.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356168 s, 11.5 MB/s 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.390 09:20:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:42.390 09:20:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.390 09:20:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.390 09:20:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:42.650 /dev/nbd1 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.650 1+0 records in 00:09:42.650 1+0 records out 00:09:42.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332916 s, 12.3 MB/s 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.650 09:20:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.650 09:20:20 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.650 09:20:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:42.910 { 00:09:42.910 "nbd_device": "/dev/nbd0", 00:09:42.910 "bdev_name": "Malloc0" 00:09:42.910 }, 00:09:42.910 { 00:09:42.910 "nbd_device": "/dev/nbd1", 00:09:42.910 "bdev_name": "Malloc1" 00:09:42.910 } 00:09:42.910 ]' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:42.910 { 00:09:42.910 "nbd_device": "/dev/nbd0", 00:09:42.910 "bdev_name": "Malloc0" 00:09:42.910 }, 00:09:42.910 { 00:09:42.910 "nbd_device": "/dev/nbd1", 00:09:42.910 "bdev_name": "Malloc1" 00:09:42.910 } 00:09:42.910 ]' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:42.910 /dev/nbd1' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:42.910 /dev/nbd1' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:42.910 256+0 records in 00:09:42.910 256+0 records out 00:09:42.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119965 s, 87.4 MB/s 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:42.910 256+0 records in 00:09:42.910 256+0 records out 00:09:42.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262646 s, 39.9 MB/s 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:42.910 256+0 records in 00:09:42.910 
256+0 records out 00:09:42.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278155 s, 37.7 MB/s 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:42.910 09:20:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.169 09:20:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.428 09:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:43.686 09:20:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:43.687 09:20:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:43.687 09:20:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:43.945 09:20:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:44.204 [2024-12-09 09:20:21.674498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.204 [2024-12-09 09:20:21.713278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.204 [2024-12-09 09:20:21.713282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.204 [2024-12-09 09:20:21.755621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.204 [2024-12-09 09:20:21.755698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:44.204 [2024-12-09 09:20:21.755710] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:47.493 spdk_app_start Round 1 00:09:47.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:47.493 09:20:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:47.493 09:20:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:47.493 09:20:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58209 /var/tmp/spdk-nbd.sock 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58209 ']' 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.493 09:20:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:47.493 09:20:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:47.493 Malloc0 00:09:47.493 09:20:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:47.750 Malloc1 00:09:47.750 09:20:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.750 09:20:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:48.008 /dev/nbd0 00:09:48.008 09:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:48.008 09:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:48.008 1+0 records in 00:09:48.008 1+0 records out 
00:09:48.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253665 s, 16.1 MB/s 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.008 09:20:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:48.008 09:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.008 09:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.008 09:20:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:48.267 /dev/nbd1 00:09:48.267 09:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:48.267 09:20:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:48.267 1+0 records in 00:09:48.267 1+0 records out 00:09:48.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343797 s, 11.9 MB/s 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.267 09:20:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:48.267 09:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.267 09:20:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.267 09:20:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.267 09:20:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.268 09:20:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.578 09:20:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:48.578 { 00:09:48.578 "nbd_device": "/dev/nbd0", 00:09:48.578 "bdev_name": "Malloc0" 00:09:48.578 }, 00:09:48.578 { 00:09:48.578 "nbd_device": "/dev/nbd1", 00:09:48.578 "bdev_name": "Malloc1" 00:09:48.578 } 
00:09:48.578 ]' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.578 { 00:09:48.578 "nbd_device": "/dev/nbd0", 00:09:48.578 "bdev_name": "Malloc0" 00:09:48.578 }, 00:09:48.578 { 00:09:48.578 "nbd_device": "/dev/nbd1", 00:09:48.578 "bdev_name": "Malloc1" 00:09:48.578 } 00:09:48.578 ]' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:48.578 /dev/nbd1' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:48.578 /dev/nbd1' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:48.578 256+0 records in 00:09:48.578 256+0 records out 00:09:48.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524864 s, 200 MB/s 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:48.578 256+0 records in 00:09:48.578 256+0 records out 00:09:48.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223491 s, 46.9 MB/s 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:48.578 256+0 records in 00:09:48.578 256+0 records out 00:09:48.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030821 s, 34.0 MB/s 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.578 09:20:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.837 09:20:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.096 09:20:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:49.355 09:20:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:49.355 09:20:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:49.614 09:20:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:49.874 [2024-12-09 09:20:27.411114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.874 [2024-12-09 09:20:27.480604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.874 [2024-12-09 09:20:27.480609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.874 [2024-12-09 09:20:27.561079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.874 [2024-12-09 09:20:27.561176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:49.874 [2024-12-09 09:20:27.561187] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:53.162 spdk_app_start Round 2 00:09:53.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:53.162 09:20:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:53.162 09:20:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:53.162 09:20:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58209 /var/tmp/spdk-nbd.sock 00:09:53.162 09:20:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58209 ']' 00:09:53.162 09:20:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:53.162 09:20:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.163 09:20:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:53.163 09:20:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.163 09:20:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:53.163 09:20:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.163 09:20:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:53.163 09:20:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:53.163 Malloc0 00:09:53.163 09:20:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:53.163 Malloc1 00:09:53.163 09:20:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.163 09:20:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:53.421 /dev/nbd0 00:09:53.421 09:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:53.421 09:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:53.421 1+0 records in 00:09:53.421 1+0 records out 
00:09:53.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269247 s, 15.2 MB/s 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:53.421 09:20:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:53.421 09:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.421 09:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.421 09:20:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:53.680 /dev/nbd1 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:53.680 1+0 records in 00:09:53.680 1+0 records out 00:09:53.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366753 s, 11.2 MB/s 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:53.680 09:20:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.680 09:20:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:53.940 { 00:09:53.940 "nbd_device": "/dev/nbd0", 00:09:53.940 "bdev_name": "Malloc0" 00:09:53.940 }, 00:09:53.940 { 00:09:53.940 "nbd_device": "/dev/nbd1", 00:09:53.940 "bdev_name": "Malloc1" 00:09:53.940 } 
00:09:53.940 ]' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:53.940 { 00:09:53.940 "nbd_device": "/dev/nbd0", 00:09:53.940 "bdev_name": "Malloc0" 00:09:53.940 }, 00:09:53.940 { 00:09:53.940 "nbd_device": "/dev/nbd1", 00:09:53.940 "bdev_name": "Malloc1" 00:09:53.940 } 00:09:53.940 ]' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:53.940 /dev/nbd1' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:53.940 /dev/nbd1' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:53.940 256+0 records in 00:09:53.940 256+0 records out 00:09:53.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011873 s, 88.3 MB/s 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:53.940 256+0 records in 00:09:53.940 256+0 records out 00:09:53.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261635 s, 40.1 MB/s 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.940 09:20:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:54.199 256+0 records in 00:09:54.199 256+0 records out 00:09:54.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264935 s, 39.6 MB/s 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.199 09:20:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:54.457 09:20:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:54.457 09:20:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:54.457 09:20:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:54.457 09:20:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.457 09:20:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.457 09:20:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:54.457 09:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:54.457 09:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.457 09:20:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.457 09:20:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.763 09:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:55.022 09:20:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:55.022 09:20:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:55.340 09:20:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:55.340 [2024-12-09 09:20:32.994417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.340 [2024-12-09 09:20:33.047783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.340 [2024-12-09 09:20:33.047784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.598 [2024-12-09 09:20:33.090552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.598 [2024-12-09 09:20:33.090632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:55.598 [2024-12-09 09:20:33.090643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:58.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:58.881 09:20:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58209 /var/tmp/spdk-nbd.sock 00:09:58.881 09:20:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58209 ']' 00:09:58.881 09:20:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:58.881 09:20:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.881 09:20:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:58.881 09:20:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.881 09:20:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:58.881 09:20:36 event.app_repeat -- event/event.sh@39 -- # killprocess 58209 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58209 ']' 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58209 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58209 00:09:58.881 killing process with pid 58209 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58209' 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58209 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58209 00:09:58.881 spdk_app_start is called in Round 0. 00:09:58.881 Shutdown signal received, stop current app iteration 00:09:58.881 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:09:58.881 spdk_app_start is called in Round 1. 00:09:58.881 Shutdown signal received, stop current app iteration 00:09:58.881 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:09:58.881 spdk_app_start is called in Round 2. 00:09:58.881 Shutdown signal received, stop current app iteration 00:09:58.881 Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 reinitialization... 00:09:58.881 spdk_app_start is called in Round 3. 00:09:58.881 Shutdown signal received, stop current app iteration 00:09:58.881 09:20:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:58.881 09:20:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:58.881 00:09:58.881 real 0m17.999s 00:09:58.881 user 0m39.607s 00:09:58.881 sys 0m3.175s 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.881 ************************************ 00:09:58.881 END TEST app_repeat 00:09:58.881 ************************************ 00:09:58.881 09:20:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:58.881 09:20:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:58.881 09:20:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:58.881 09:20:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.881 09:20:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.881 09:20:36 event -- common/autotest_common.sh@10 -- # set +x 00:09:58.881 ************************************ 00:09:58.881 START TEST cpu_locks 00:09:58.881 ************************************ 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:58.881 * Looking for test storage... 
00:09:58.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.881 09:20:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.881 --rc genhtml_branch_coverage=1 00:09:58.881 --rc genhtml_function_coverage=1 00:09:58.881 --rc genhtml_legend=1 00:09:58.881 --rc geninfo_all_blocks=1 00:09:58.881 --rc geninfo_unexecuted_blocks=1 00:09:58.881 00:09:58.881 ' 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.881 --rc genhtml_branch_coverage=1 00:09:58.881 --rc genhtml_function_coverage=1 
00:09:58.881 --rc genhtml_legend=1 00:09:58.881 --rc geninfo_all_blocks=1 00:09:58.881 --rc geninfo_unexecuted_blocks=1 00:09:58.881 00:09:58.881 ' 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.881 --rc genhtml_branch_coverage=1 00:09:58.881 --rc genhtml_function_coverage=1 00:09:58.881 --rc genhtml_legend=1 00:09:58.881 --rc geninfo_all_blocks=1 00:09:58.881 --rc geninfo_unexecuted_blocks=1 00:09:58.881 00:09:58.881 ' 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.881 --rc genhtml_branch_coverage=1 00:09:58.881 --rc genhtml_function_coverage=1 00:09:58.881 --rc genhtml_legend=1 00:09:58.881 --rc geninfo_all_blocks=1 00:09:58.881 --rc geninfo_unexecuted_blocks=1 00:09:58.881 00:09:58.881 ' 00:09:58.881 09:20:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:58.881 09:20:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:58.881 09:20:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:58.881 09:20:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.881 09:20:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:58.881 ************************************ 00:09:58.881 START TEST default_locks 00:09:58.881 ************************************ 00:09:58.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58639 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58639 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58639 ']' 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.882 09:20:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.140 [2024-12-09 09:20:36.653135] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:09:59.140 [2024-12-09 09:20:36.653346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58639 ] 00:09:59.140 [2024-12-09 09:20:36.803214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.140 [2024-12-09 09:20:36.850597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.398 [2024-12-09 09:20:36.908206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.964 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.964 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:59.964 09:20:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58639 00:09:59.964 09:20:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:59.964 09:20:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58639 00:10:00.224 09:20:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58639 00:10:00.224 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58639 ']' 00:10:00.224 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58639 00:10:00.224 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:00.224 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.224 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58639 00:10:00.516 killing process with pid 58639 00:10:00.516 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.516 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.516 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58639' 00:10:00.516 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58639 00:10:00.516 09:20:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58639 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58639 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58639 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:00.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:00.775 ERROR: process (pid: 58639) is no longer running 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58639 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58639 ']' 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58639) - No such process 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:00.775 00:10:00.775 real 0m1.708s 00:10:00.775 user 0m1.799s 00:10:00.775 sys 0m0.526s 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.775 ************************************ 00:10:00.775 END TEST default_locks 00:10:00.775 ************************************ 00:10:00.775 09:20:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.775 09:20:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:00.775 09:20:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.775 09:20:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.775 09:20:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.775 ************************************ 00:10:00.775 START TEST default_locks_via_rpc 00:10:00.775 ************************************ 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58685 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 58685 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58685 ']' 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.775 09:20:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.775 [2024-12-09 09:20:38.435211] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:00.775 [2024-12-09 09:20:38.435281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58685 ] 00:10:01.035 [2024-12-09 09:20:38.586353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.035 [2024-12-09 09:20:38.637313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.035 [2024-12-09 09:20:38.693565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.602 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.862 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58685 00:10:01.862 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58685 00:10:01.862 09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.120 
09:20:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58685 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58685 ']' 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58685 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58685 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.120 killing process with pid 58685 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58685' 00:10:02.120 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58685 00:10:02.121 09:20:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58685 00:10:02.378 ************************************ 00:10:02.378 END TEST default_locks_via_rpc 00:10:02.378 ************************************ 00:10:02.378 00:10:02.378 real 0m1.677s 00:10:02.378 user 0m1.801s 00:10:02.378 sys 0m0.490s 00:10:02.378 09:20:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.378 09:20:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.637 09:20:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:02.637 09:20:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.637 09:20:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.637 09:20:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.637 ************************************ 00:10:02.637 START TEST non_locking_app_on_locked_coremask 00:10:02.637 ************************************ 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58731 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58731 /var/tmp/spdk.sock 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58731 ']' 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:02.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.637 09:20:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.637 [2024-12-09 09:20:40.187528] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:02.637 [2024-12-09 09:20:40.187619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58731 ] 00:10:02.637 [2024-12-09 09:20:40.327196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.895 [2024-12-09 09:20:40.375570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.895 [2024-12-09 09:20:40.430812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:03.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58747 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58747 /var/tmp/spdk2.sock 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58747 ']' 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.462 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:03.462 [2024-12-09 09:20:41.122568] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:03.462 [2024-12-09 09:20:41.123008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58747 ] 00:10:03.719 [2024-12-09 09:20:41.274112] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:03.719 [2024-12-09 09:20:41.274149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.719 [2024-12-09 09:20:41.371277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.978 [2024-12-09 09:20:41.480701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.544 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.544 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:04.544 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58731 00:10:04.544 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58731 00:10:04.544 09:20:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58731 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58731 ']' 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58731 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58731 00:10:05.480 killing process with pid 58731 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58731' 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58731 00:10:05.480 09:20:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58731 00:10:06.048 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58747 00:10:06.048 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58747 ']' 00:10:06.048 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58747 00:10:06.048 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58747 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58747' 00:10:06.049 killing process with pid 58747 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58747 00:10:06.049 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58747 00:10:06.437 ************************************ 00:10:06.437 END TEST non_locking_app_on_locked_coremask 00:10:06.437 ************************************ 00:10:06.437 00:10:06.437 real 0m3.845s 00:10:06.437 user 0m4.224s 00:10:06.437 sys 0m1.064s 00:10:06.437 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.437 09:20:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.437 09:20:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:06.437 09:20:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.437 09:20:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.437 09:20:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.437 ************************************ 00:10:06.437 START TEST locking_app_on_unlocked_coremask 00:10:06.437 ************************************ 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58814 00:10:06.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58814 /var/tmp/spdk.sock 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58814 ']' 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.437 09:20:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.437 [2024-12-09 09:20:44.106344] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:06.437 [2024-12-09 09:20:44.106679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:10:06.696 [2024-12-09 09:20:44.255898] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:06.696 [2024-12-09 09:20:44.255954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.696 [2024-12-09 09:20:44.310403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.696 [2024-12-09 09:20:44.371419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58830 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58830 /var/tmp/spdk2.sock 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58830 ']' 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:07.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.645 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.645 [2024-12-09 09:20:45.068591] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:07.645 [2024-12-09 09:20:45.068886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:10:07.645 [2024-12-09 09:20:45.222054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.645 [2024-12-09 09:20:45.329718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.906 [2024-12-09 09:20:45.452726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.474 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.474 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:08.474 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58830 00:10:08.474 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58830 00:10:08.474 09:20:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.041 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58814 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58814 ']' 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58814 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58814 00:10:09.042 killing process with pid 58814 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58814' 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58814 00:10:09.042 09:20:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58814 00:10:09.609 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58830 00:10:09.610 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58830 ']' 00:10:09.610 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58830 00:10:09.610 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:09.610 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.610 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58830 00:10:09.868 killing process with pid 58830 00:10:09.868 09:20:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.868 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.868 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58830' 00:10:09.868 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58830 00:10:09.868 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58830 00:10:10.126 00:10:10.126 real 0m3.598s 00:10:10.126 user 0m3.993s 00:10:10.126 sys 0m1.014s 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.126 ************************************ 00:10:10.126 END TEST locking_app_on_unlocked_coremask 00:10:10.126 ************************************ 00:10:10.126 09:20:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:10.126 09:20:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.126 09:20:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.126 09:20:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.126 ************************************ 00:10:10.126 START TEST locking_app_on_locked_coremask 00:10:10.126 ************************************ 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58891 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58891 /var/tmp/spdk.sock 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 00:10:10.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.126 09:20:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.126 [2024-12-09 09:20:47.781145] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:10.127 [2024-12-09 09:20:47.781403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:10:10.385 [2024-12-09 09:20:47.926675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.385 [2024-12-09 09:20:47.978814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.385 [2024-12-09 09:20:48.034695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58902 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58902 /var/tmp/spdk2.sock 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58902 /var/tmp/spdk2.sock 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:10.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58902 /var/tmp/spdk2.sock 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58902 ']' 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.950 09:20:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.219 [2024-12-09 09:20:48.696948] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:11.219 [2024-12-09 09:20:48.697018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58902 ] 00:10:11.219 [2024-12-09 09:20:48.845066] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58891 has claimed it. 00:10:11.219 [2024-12-09 09:20:48.845126] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:11.823 ERROR: process (pid: 58902) is no longer running 00:10:11.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58902) - No such process 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58891 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58891 00:10:11.823 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58891 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58891 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:10:12.390 killing process with pid 58891 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58891 00:10:12.390 09:20:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58891 00:10:12.649 00:10:12.649 real 0m2.522s 00:10:12.649 user 0m2.834s 00:10:12.649 sys 0m0.632s 00:10:12.649 09:20:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.649 ************************************ 00:10:12.649 
09:20:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:12.649 END TEST locking_app_on_locked_coremask 00:10:12.649 ************************************ 00:10:12.649 09:20:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:12.649 09:20:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.650 09:20:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.650 09:20:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.650 ************************************ 00:10:12.650 START TEST locking_overlapped_coremask 00:10:12.650 ************************************ 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58953 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58953 /var/tmp/spdk.sock 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58953 ']' 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.650 09:20:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:12.650 [2024-12-09 09:20:50.371192] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:12.908 [2024-12-09 09:20:50.371412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:10:12.908 [2024-12-09 09:20:50.522588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.908 [2024-12-09 09:20:50.574232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.908 [2024-12-09 09:20:50.574416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.908 [2024-12-09 09:20:50.574418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.166 [2024-12-09 09:20:50.635040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58971 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58971 /var/tmp/spdk2.sock 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58971 /var/tmp/spdk2.sock 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58971 /var/tmp/spdk2.sock 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58971 ']' 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.733 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.733 [2024-12-09 09:20:51.326789] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:13.733 [2024-12-09 09:20:51.327373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:10:13.992 [2024-12-09 09:20:51.478008] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58953 has claimed it. 00:10:13.992 [2024-12-09 09:20:51.478064] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:14.560 ERROR: process (pid: 58971) is no longer running 00:10:14.560 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58971) - No such process 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58953 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58953 ']' 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58953 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.560 09:20:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58953 00:10:14.560 09:20:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.560 09:20:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.560 09:20:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58953' 00:10:14.560 killing process with pid 58953 00:10:14.560 09:20:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58953 00:10:14.560 09:20:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58953 00:10:14.819 00:10:14.819 real 0m2.027s 00:10:14.819 user 0m5.695s 00:10:14.819 sys 0m0.390s 00:10:14.819 ************************************ 00:10:14.819 END TEST locking_overlapped_coremask 00:10:14.819 ************************************ 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.819 09:20:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:14.819 09:20:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.819 09:20:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.819 09:20:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.819 ************************************ 00:10:14.819 START TEST locking_overlapped_coremask_via_rpc 00:10:14.819 ************************************ 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59011 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59011 /var/tmp/spdk.sock 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:14.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59011 ']' 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.819 09:20:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.819 [2024-12-09 09:20:52.474593] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:14.820 [2024-12-09 09:20:52.474679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59011 ] 00:10:15.078 [2024-12-09 09:20:52.627248] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
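In the test that just ended (locking_overlapped_coremask), the first target (pid 58953, mask 0x7) holds one lock file per claimed core under /var/tmp/spdk_cpu_lock_NNN, which is why the second target (mask 0x1c) aborts with "Cannot create lock on core 2". The check_remaining_locks step afterwards only compares the surviving lock files against the expected set; a rough standalone equivalent, assuming the same /var/tmp naming seen in the trace, looks like this:

#!/usr/bin/env bash
# Rough equivalent of the check_remaining_locks step above: after the second
# target exits, cores 0-2 should still be locked by the surviving process.
expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1, 2 from mask 0x7
actual=(/var/tmp/spdk_cpu_lock_*)
if [[ "${actual[*]}" == "${expected[*]}" ]]; then
    echo "locks intact: ${actual[*]}"
else
    echo "unexpected lock files: ${actual[*]}" >&2
    exit 1
fi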
00:10:15.078 [2024-12-09 09:20:52.627293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.078 [2024-12-09 09:20:52.680824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.078 [2024-12-09 09:20:52.680956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.078 [2024-12-09 09:20:52.680961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.078 [2024-12-09 09:20:52.737184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59029 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59029 /var/tmp/spdk2.sock 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59029 ']' 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.647 09:20:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.907 [2024-12-09 09:20:53.398134] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:15.907 [2024-12-09 09:20:53.398391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59029 ] 00:10:15.907 [2024-12-09 09:20:53.546292] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:15.907 [2024-12-09 09:20:53.546336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.166 [2024-12-09 09:20:53.650070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.166 [2024-12-09 09:20:53.653621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.166 [2024-12-09 09:20:53.653625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.166 [2024-12-09 09:20:53.760071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.737 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 [2024-12-09 09:20:54.298556] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59011 has claimed it. 
00:10:16.737 request: 00:10:16.737 { 00:10:16.737 "method": "framework_enable_cpumask_locks", 00:10:16.737 "req_id": 1 00:10:16.738 } 00:10:16.738 Got JSON-RPC error response 00:10:16.738 response: 00:10:16.738 { 00:10:16.738 "code": -32603, 00:10:16.738 "message": "Failed to claim CPU core: 2" 00:10:16.738 } 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59011 /var/tmp/spdk.sock 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59011 ']' 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.738 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59029 /var/tmp/spdk2.sock 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59029 ']' 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
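The via_rpc variant starts both targets with --disable-cpumask-locks, so the overlapping masks can coexist, and only then turns locking back on at runtime. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the two calls boil down to the following; the second one is what produces the -32603 "Failed to claim CPU core: 2" response shown above (socket path as in the trace):

# First target (default /var/tmp/spdk.sock): claims cores 0-2, succeeds.
scripts/rpc.py framework_enable_cpumask_locks

# Second target, reached through its own RPC socket: core 2 is already
# claimed, so this returns JSON-RPC error -32603.
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks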
00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.997 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:17.257 ************************************ 00:10:17.257 END TEST locking_overlapped_coremask_via_rpc 00:10:17.257 ************************************ 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:17.257 00:10:17.257 real 0m2.338s 00:10:17.257 user 0m1.055s 00:10:17.257 sys 0m0.203s 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.257 09:20:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.257 09:20:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:17.257 09:20:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59011 ]] 00:10:17.257 09:20:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59011 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59011 ']' 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59011 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59011 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.257 09:20:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.258 09:20:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59011' 00:10:17.258 killing process with pid 59011 00:10:17.258 09:20:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59011 00:10:17.258 09:20:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59011 00:10:17.530 09:20:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59029 ]] 00:10:17.530 09:20:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59029 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59029 ']' 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59029 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.530 
09:20:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59029 00:10:17.530 killing process with pid 59029 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59029' 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59029 00:10:17.530 09:20:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59029 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59011 ]] 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59011 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59011 ']' 00:10:18.103 Process with pid 59011 is not found 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59011 00:10:18.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59011) - No such process 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59011 is not found' 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59029 ]] 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59029 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59029 ']' 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59029 00:10:18.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59029) - No such process 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59029 is not found' 00:10:18.103 Process with pid 59029 is not found 00:10:18.103 09:20:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:18.103 00:10:18.103 real 0m19.207s 00:10:18.103 user 0m32.873s 00:10:18.103 sys 0m5.293s 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.103 09:20:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:18.103 ************************************ 00:10:18.103 END TEST cpu_locks 00:10:18.103 ************************************ 00:10:18.103 ************************************ 00:10:18.103 END TEST event 00:10:18.103 ************************************ 00:10:18.103 00:10:18.103 real 0m47.697s 00:10:18.103 user 1m32.198s 00:10:18.103 sys 0m9.457s 00:10:18.103 09:20:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.103 09:20:55 event -- common/autotest_common.sh@10 -- # set +x 00:10:18.103 09:20:55 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:18.103 09:20:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.103 09:20:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.103 09:20:55 -- common/autotest_common.sh@10 -- # set +x 00:10:18.103 ************************************ 00:10:18.103 START TEST thread 00:10:18.103 ************************************ 00:10:18.103 09:20:55 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:18.103 * Looking for test storage... 
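The cpu_locks cleanup above relies on a tolerant kill pattern: kill -0 probes whether the pid still exists, ps -o comm= recovers the process name (reactor_0 here), and a pid that has already exited is reported as not found instead of failing the run. A condensed sketch of that pattern (simplified; the real helper also special-cases processes running under sudo):

#!/usr/bin/env bash
# Condensed form of the killprocess/cleanup pattern traced above.
killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then      # kill -0: existence check only
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is our child
}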
00:10:18.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:18.103 09:20:55 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.103 09:20:55 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.103 09:20:55 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.384 09:20:55 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.384 09:20:55 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.384 09:20:55 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.384 09:20:55 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.384 09:20:55 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.384 09:20:55 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.384 09:20:55 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.384 09:20:55 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.384 09:20:55 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.384 09:20:55 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.384 09:20:55 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.384 09:20:55 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:18.384 09:20:55 thread -- scripts/common.sh@345 -- # : 1 00:10:18.384 09:20:55 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.384 09:20:55 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.384 09:20:55 thread -- scripts/common.sh@365 -- # decimal 1 00:10:18.384 09:20:55 thread -- scripts/common.sh@353 -- # local d=1 00:10:18.384 09:20:55 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.384 09:20:55 thread -- scripts/common.sh@355 -- # echo 1 00:10:18.384 09:20:55 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.384 09:20:55 thread -- scripts/common.sh@366 -- # decimal 2 00:10:18.384 09:20:55 thread -- scripts/common.sh@353 -- # local d=2 00:10:18.384 09:20:55 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.384 09:20:55 thread -- scripts/common.sh@355 -- # echo 2 00:10:18.384 09:20:55 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.384 09:20:55 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.384 09:20:55 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.384 09:20:55 thread -- scripts/common.sh@368 -- # return 0 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.384 --rc genhtml_branch_coverage=1 00:10:18.384 --rc genhtml_function_coverage=1 00:10:18.384 --rc genhtml_legend=1 00:10:18.384 --rc geninfo_all_blocks=1 00:10:18.384 --rc geninfo_unexecuted_blocks=1 00:10:18.384 00:10:18.384 ' 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.384 --rc genhtml_branch_coverage=1 00:10:18.384 --rc genhtml_function_coverage=1 00:10:18.384 --rc genhtml_legend=1 00:10:18.384 --rc geninfo_all_blocks=1 00:10:18.384 --rc geninfo_unexecuted_blocks=1 00:10:18.384 00:10:18.384 ' 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:18.384 --rc genhtml_branch_coverage=1 00:10:18.384 --rc genhtml_function_coverage=1 00:10:18.384 --rc genhtml_legend=1 00:10:18.384 --rc geninfo_all_blocks=1 00:10:18.384 --rc geninfo_unexecuted_blocks=1 00:10:18.384 00:10:18.384 ' 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.384 --rc genhtml_branch_coverage=1 00:10:18.384 --rc genhtml_function_coverage=1 00:10:18.384 --rc genhtml_legend=1 00:10:18.384 --rc geninfo_all_blocks=1 00:10:18.384 --rc geninfo_unexecuted_blocks=1 00:10:18.384 00:10:18.384 ' 00:10:18.384 09:20:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.384 09:20:55 thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.384 ************************************ 00:10:18.384 START TEST thread_poller_perf 00:10:18.384 ************************************ 00:10:18.384 09:20:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:18.384 [2024-12-09 09:20:55.935870] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:18.384 [2024-12-09 09:20:55.935960] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:10:18.384 [2024-12-09 09:20:56.087610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.643 Running 1000 pollers for 1 seconds with 1 microseconds period. 
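Before enabling coverage flags, each test script checks the installed lcov against version 1.15; the lt 1.15 2 trace above is cmp_versions from scripts/common.sh splitting both strings on '.', '-' and ':' and comparing them field by field. A stripped-down, numbers-only version of that comparison (the real helper also normalizes non-numeric fields):

#!/usr/bin/env bash
# Field-by-field "less than" version compare, as in the lt 1.15 2 trace above.
# Assumes purely numeric fields; returns 0 when $1 < $2.
version_lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                   # equal, so not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"           # prints: 1.15 < 2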
00:10:18.644 [2024-12-09 09:20:56.138591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.578 [2024-12-09T09:20:57.301Z] ====================================== 00:10:19.578 [2024-12-09T09:20:57.301Z] busy:2497048930 (cyc) 00:10:19.578 [2024-12-09T09:20:57.301Z] total_run_count: 402000 00:10:19.578 [2024-12-09T09:20:57.301Z] tsc_hz: 2490000000 (cyc) 00:10:19.578 [2024-12-09T09:20:57.301Z] ====================================== 00:10:19.578 [2024-12-09T09:20:57.301Z] poller_cost: 6211 (cyc), 2494 (nsec) 00:10:19.578 00:10:19.578 real 0m1.274s 00:10:19.578 user 0m1.124s 00:10:19.578 sys 0m0.045s 00:10:19.578 09:20:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.578 09:20:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:19.578 ************************************ 00:10:19.578 END TEST thread_poller_perf 00:10:19.578 ************************************ 00:10:19.578 09:20:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:19.578 09:20:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:19.578 09:20:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.578 09:20:57 thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.578 ************************************ 00:10:19.578 START TEST thread_poller_perf 00:10:19.578 ************************************ 00:10:19.578 09:20:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:19.578 [2024-12-09 09:20:57.282682] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:19.578 [2024-12-09 09:20:57.282762] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59195 ] 00:10:19.837 [2024-12-09 09:20:57.435754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.837 Running 1000 pollers for 1 seconds with 0 microseconds period. 
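The poller_perf summary above is straightforward arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by tsc_hz (2497048930 / 402000 ≈ 6211 cyc, and 6211 cycles at 2.49 GHz ≈ 2494 ns). The zero-period run that follows measures the same thing without the 1 microsecond delay between iterations, which is why its per-poll cost drops so sharply. The conversion in a few lines of shell (integer math; the tool's own rounding may differ in the last digit):

#!/usr/bin/env bash
# Recompute the poller_cost figures reported above.
busy=2497048930 runs=402000 tsc_hz=2490000000
cost_cyc=$(( busy / runs ))
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"   # 6211 (cyc), 2494 (nsec)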
00:10:19.837 [2024-12-09 09:20:57.478837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.213 [2024-12-09T09:20:58.936Z] ====================================== 00:10:21.213 [2024-12-09T09:20:58.936Z] busy:2491766540 (cyc) 00:10:21.213 [2024-12-09T09:20:58.936Z] total_run_count: 4925000 00:10:21.213 [2024-12-09T09:20:58.936Z] tsc_hz: 2490000000 (cyc) 00:10:21.213 [2024-12-09T09:20:58.936Z] ====================================== 00:10:21.213 [2024-12-09T09:20:58.936Z] poller_cost: 505 (cyc), 202 (nsec) 00:10:21.213 00:10:21.213 real 0m1.263s 00:10:21.213 user 0m1.110s 00:10:21.213 sys 0m0.047s 00:10:21.213 09:20:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.213 ************************************ 00:10:21.213 END TEST thread_poller_perf 00:10:21.213 ************************************ 00:10:21.213 09:20:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:21.213 09:20:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:21.213 ************************************ 00:10:21.213 END TEST thread 00:10:21.213 ************************************ 00:10:21.213 00:10:21.213 real 0m2.904s 00:10:21.213 user 0m2.406s 00:10:21.213 sys 0m0.300s 00:10:21.213 09:20:58 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.213 09:20:58 thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.213 09:20:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:21.213 09:20:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:21.213 09:20:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:21.213 09:20:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.213 09:20:58 -- common/autotest_common.sh@10 -- # set +x 00:10:21.213 ************************************ 00:10:21.213 START TEST app_cmdline 00:10:21.213 ************************************ 00:10:21.213 09:20:58 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:21.213 * Looking for test storage... 
00:10:21.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:21.213 09:20:58 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:21.213 09:20:58 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:10:21.213 09:20:58 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:21.213 09:20:58 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.213 09:20:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.214 09:20:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.214 --rc genhtml_branch_coverage=1 00:10:21.214 --rc genhtml_function_coverage=1 00:10:21.214 --rc genhtml_legend=1 00:10:21.214 --rc geninfo_all_blocks=1 00:10:21.214 --rc geninfo_unexecuted_blocks=1 00:10:21.214 00:10:21.214 ' 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.214 --rc genhtml_branch_coverage=1 00:10:21.214 --rc genhtml_function_coverage=1 00:10:21.214 --rc genhtml_legend=1 00:10:21.214 --rc geninfo_all_blocks=1 00:10:21.214 --rc geninfo_unexecuted_blocks=1 00:10:21.214 
00:10:21.214 ' 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.214 --rc genhtml_branch_coverage=1 00:10:21.214 --rc genhtml_function_coverage=1 00:10:21.214 --rc genhtml_legend=1 00:10:21.214 --rc geninfo_all_blocks=1 00:10:21.214 --rc geninfo_unexecuted_blocks=1 00:10:21.214 00:10:21.214 ' 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:21.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.214 --rc genhtml_branch_coverage=1 00:10:21.214 --rc genhtml_function_coverage=1 00:10:21.214 --rc genhtml_legend=1 00:10:21.214 --rc geninfo_all_blocks=1 00:10:21.214 --rc geninfo_unexecuted_blocks=1 00:10:21.214 00:10:21.214 ' 00:10:21.214 09:20:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:21.214 09:20:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59272 00:10:21.214 09:20:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:21.214 09:20:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59272 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59272 ']' 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.214 09:20:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:21.214 [2024-12-09 09:20:58.931817] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:21.214 [2024-12-09 09:20:58.931896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ] 00:10:21.472 [2024-12-09 09:20:59.080820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.472 [2024-12-09 09:20:59.131244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.472 [2024-12-09 09:20:59.187475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.404 09:20:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.404 09:20:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:22.404 { 00:10:22.404 "version": "SPDK v25.01-pre git sha1 496bfd677", 00:10:22.404 "fields": { 00:10:22.404 "major": 25, 00:10:22.404 "minor": 1, 00:10:22.404 "patch": 0, 00:10:22.404 "suffix": "-pre", 00:10:22.404 "commit": "496bfd677" 00:10:22.404 } 00:10:22.404 } 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:22.404 09:20:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.404 09:21:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:22.404 09:21:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:22.404 09:21:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:22.404 09:21:00 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.662 request: 00:10:22.662 { 00:10:22.662 "method": "env_dpdk_get_mem_stats", 00:10:22.662 "req_id": 1 00:10:22.662 } 00:10:22.662 Got JSON-RPC error response 00:10:22.662 response: 00:10:22.662 { 00:10:22.662 "code": -32601, 00:10:22.662 "message": "Method not found" 00:10:22.662 } 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:22.662 09:21:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59272 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59272 ']' 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59272 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59272 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.662 killing process with pid 59272 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59272' 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@973 -- # kill 59272 00:10:22.662 09:21:00 app_cmdline -- common/autotest_common.sh@978 -- # wait 59272 00:10:23.230 00:10:23.230 real 0m2.071s 00:10:23.230 user 0m2.461s 00:10:23.230 sys 0m0.496s 00:10:23.230 09:21:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.230 09:21:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:23.230 ************************************ 00:10:23.230 END TEST app_cmdline 00:10:23.230 ************************************ 00:10:23.230 09:21:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:23.230 09:21:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.230 09:21:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.230 09:21:00 -- common/autotest_common.sh@10 -- # set +x 00:10:23.230 ************************************ 00:10:23.230 START TEST version 00:10:23.230 ************************************ 00:10:23.230 09:21:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:23.230 * Looking for test storage... 
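The cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable: spdk_get_version returns the JSON block shown earlier, while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". The same checks can be run by hand with scripts/rpc.py against the default socket (expected output sketched in the comments):

# Only the allowlisted methods should be reported.
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
#   rpc_get_methods
#   spdk_get_version

# Allowed call: returns the version object seen in the log.
scripts/rpc.py spdk_get_version | jq -r '.version'
#   SPDK v25.01-pre git sha1 496bfd677

# Anything outside the allowlist fails with JSON-RPC error -32601.
scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected, as expected"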
00:10:23.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:23.230 09:21:00 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.230 09:21:00 version -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.230 09:21:00 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.490 09:21:01 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.490 09:21:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.490 09:21:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.490 09:21:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.490 09:21:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.490 09:21:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.490 09:21:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.490 09:21:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.490 09:21:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.490 09:21:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.490 09:21:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.490 09:21:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.490 09:21:01 version -- scripts/common.sh@344 -- # case "$op" in 00:10:23.490 09:21:01 version -- scripts/common.sh@345 -- # : 1 00:10:23.490 09:21:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.490 09:21:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.490 09:21:01 version -- scripts/common.sh@365 -- # decimal 1 00:10:23.490 09:21:01 version -- scripts/common.sh@353 -- # local d=1 00:10:23.490 09:21:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.490 09:21:01 version -- scripts/common.sh@355 -- # echo 1 00:10:23.490 09:21:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.490 09:21:01 version -- scripts/common.sh@366 -- # decimal 2 00:10:23.490 09:21:01 version -- scripts/common.sh@353 -- # local d=2 00:10:23.490 09:21:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.490 09:21:01 version -- scripts/common.sh@355 -- # echo 2 00:10:23.490 09:21:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.490 09:21:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.490 09:21:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.490 09:21:01 version -- scripts/common.sh@368 -- # return 0 00:10:23.490 09:21:01 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.490 09:21:01 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.490 --rc genhtml_branch_coverage=1 00:10:23.490 --rc genhtml_function_coverage=1 00:10:23.490 --rc genhtml_legend=1 00:10:23.490 --rc geninfo_all_blocks=1 00:10:23.490 --rc geninfo_unexecuted_blocks=1 00:10:23.490 00:10:23.490 ' 00:10:23.490 09:21:01 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.490 --rc genhtml_branch_coverage=1 00:10:23.490 --rc genhtml_function_coverage=1 00:10:23.490 --rc genhtml_legend=1 00:10:23.490 --rc geninfo_all_blocks=1 00:10:23.490 --rc geninfo_unexecuted_blocks=1 00:10:23.490 00:10:23.490 ' 00:10:23.490 09:21:01 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.490 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:23.490 --rc genhtml_branch_coverage=1 00:10:23.490 --rc genhtml_function_coverage=1 00:10:23.490 --rc genhtml_legend=1 00:10:23.490 --rc geninfo_all_blocks=1 00:10:23.490 --rc geninfo_unexecuted_blocks=1 00:10:23.490 00:10:23.490 ' 00:10:23.490 09:21:01 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.490 --rc genhtml_branch_coverage=1 00:10:23.490 --rc genhtml_function_coverage=1 00:10:23.490 --rc genhtml_legend=1 00:10:23.490 --rc geninfo_all_blocks=1 00:10:23.490 --rc geninfo_unexecuted_blocks=1 00:10:23.490 00:10:23.490 ' 00:10:23.490 09:21:01 version -- app/version.sh@17 -- # get_header_version major 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # cut -f2 00:10:23.490 09:21:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.490 09:21:01 version -- app/version.sh@17 -- # major=25 00:10:23.490 09:21:01 version -- app/version.sh@18 -- # get_header_version minor 00:10:23.490 09:21:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # cut -f2 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.490 09:21:01 version -- app/version.sh@18 -- # minor=1 00:10:23.490 09:21:01 version -- app/version.sh@19 -- # get_header_version patch 00:10:23.490 09:21:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # cut -f2 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.490 09:21:01 version -- app/version.sh@19 -- # patch=0 00:10:23.490 09:21:01 version -- app/version.sh@20 -- # get_header_version suffix 00:10:23.490 09:21:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # cut -f2 00:10:23.490 09:21:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:23.490 09:21:01 version -- app/version.sh@20 -- # suffix=-pre 00:10:23.490 09:21:01 version -- app/version.sh@22 -- # version=25.1 00:10:23.490 09:21:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:23.490 09:21:01 version -- app/version.sh@28 -- # version=25.1rc0 00:10:23.490 09:21:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:23.490 09:21:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:23.490 09:21:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:23.491 09:21:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:23.491 00:10:23.491 real 0m0.334s 00:10:23.491 user 0m0.199s 00:10:23.491 sys 0m0.196s 00:10:23.491 09:21:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.491 09:21:01 version -- common/autotest_common.sh@10 -- # set +x 00:10:23.491 ************************************ 00:10:23.491 END TEST version 00:10:23.491 ************************************ 00:10:23.491 09:21:01 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:23.491 09:21:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:23.491 09:21:01 -- spdk/autotest.sh@194 -- # uname -s 00:10:23.491 09:21:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:23.491 09:21:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:23.491 09:21:01 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:10:23.491 09:21:01 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:10:23.491 09:21:01 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:23.491 09:21:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.491 09:21:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.491 09:21:01 -- common/autotest_common.sh@10 -- # set +x 00:10:23.750 ************************************ 00:10:23.750 START TEST spdk_dd 00:10:23.750 ************************************ 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:23.750 * Looking for test storage... 00:10:23.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@345 -- # : 1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.750 09:21:01 spdk_dd -- scripts/common.sh@368 -- # return 0 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 09:21:01 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 09:21:01 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.751 09:21:01 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.010 09:21:01 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.010 09:21:01 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.010 09:21:01 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.010 09:21:01 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.010 09:21:01 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.010 09:21:01 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.010 09:21:01 spdk_dd -- paths/export.sh@5 -- # export PATH 00:10:24.010 09:21:01 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.010 09:21:01 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:24.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.530 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.530 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.530 09:21:02 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:10:24.530 09:21:02 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@233 -- # local class 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@235 -- # local progif 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@236 -- # class=01 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:10:24.530 09:21:02 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@18 -- # local i 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@27 -- # return 0 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:10:24.530 09:21:02 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:24.530 09:21:02 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@139 -- # local lib 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.530 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:10:24.531 * spdk_dd linked to liburing 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:24.531 09:21:02 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:24.531 09:21:02 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:24.532 09:21:02 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:24.532 09:21:02 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:10:24.532 09:21:02 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:10:24.532 09:21:02 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:10:24.532 09:21:02 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:10:24.532 09:21:02 spdk_dd -- dd/common.sh@153 -- # return 0 00:10:24.532 09:21:02 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:10:24.532 09:21:02 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:24.532 09:21:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.532 09:21:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.532 09:21:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:24.532 ************************************ 00:10:24.532 START TEST spdk_dd_basic_rw 00:10:24.532 ************************************ 00:10:24.532 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:10:24.790 * Looking for test storage... 00:10:24.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:24.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.791 --rc genhtml_branch_coverage=1 00:10:24.791 --rc genhtml_function_coverage=1 00:10:24.791 --rc genhtml_legend=1 00:10:24.791 --rc geninfo_all_blocks=1 00:10:24.791 --rc geninfo_unexecuted_blocks=1 00:10:24.791 00:10:24.791 ' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:24.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.791 --rc genhtml_branch_coverage=1 00:10:24.791 --rc genhtml_function_coverage=1 00:10:24.791 --rc genhtml_legend=1 00:10:24.791 --rc geninfo_all_blocks=1 00:10:24.791 --rc geninfo_unexecuted_blocks=1 00:10:24.791 00:10:24.791 ' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:24.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.791 --rc genhtml_branch_coverage=1 00:10:24.791 --rc genhtml_function_coverage=1 00:10:24.791 --rc genhtml_legend=1 00:10:24.791 --rc geninfo_all_blocks=1 00:10:24.791 --rc geninfo_unexecuted_blocks=1 00:10:24.791 00:10:24.791 ' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:24.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.791 --rc genhtml_branch_coverage=1 00:10:24.791 --rc genhtml_function_coverage=1 00:10:24.791 --rc genhtml_legend=1 00:10:24.791 --rc geninfo_all_blocks=1 00:10:24.791 --rc geninfo_unexecuted_blocks=1 00:10:24.791 00:10:24.791 ' 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:10:24.791 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:10:25.067 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:10:25.067 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:10:25.068 ************************************ 00:10:25.068 START TEST dd_bs_lt_native_bs 00:10:25.068 ************************************ 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:25.068 09:21:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:10:25.068 { 00:10:25.068 "subsystems": [ 00:10:25.068 { 00:10:25.068 "subsystem": "bdev", 00:10:25.068 "config": [ 00:10:25.068 { 00:10:25.068 "params": { 00:10:25.068 "trtype": "pcie", 00:10:25.068 "traddr": "0000:00:10.0", 00:10:25.068 "name": "Nvme0" 00:10:25.068 }, 00:10:25.068 "method": "bdev_nvme_attach_controller" 00:10:25.068 }, 00:10:25.068 { 00:10:25.068 "method": "bdev_wait_for_examine" 00:10:25.068 } 00:10:25.068 ] 00:10:25.068 } 00:10:25.068 ] 00:10:25.068 } 00:10:25.068 [2024-12-09 09:21:02.745608] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:25.068 [2024-12-09 09:21:02.745672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59629 ] 00:10:25.327 [2024-12-09 09:21:02.898928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.327 [2024-12-09 09:21:02.949394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.327 [2024-12-09 09:21:02.992352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.586 [2024-12-09 09:21:03.095191] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:10:25.586 [2024-12-09 09:21:03.095267] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:25.586 [2024-12-09 09:21:03.201122] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.586 00:10:25.586 real 0m0.559s 00:10:25.586 user 0m0.375s 00:10:25.586 sys 0m0.144s 00:10:25.586 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.586 09:21:03 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:10:25.586 ************************************ 00:10:25.586 END TEST dd_bs_lt_native_bs 00:10:25.586 ************************************ 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:25.845 ************************************ 00:10:25.845 START TEST dd_rw 00:10:25.845 ************************************ 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:25.845 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.412 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:10:26.412 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:26.412 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:26.412 09:21:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.412 [2024-12-09 09:21:03.938047] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:26.412 [2024-12-09 09:21:03.938124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:10:26.412 { 00:10:26.412 "subsystems": [ 00:10:26.412 { 00:10:26.412 "subsystem": "bdev", 00:10:26.412 "config": [ 00:10:26.412 { 00:10:26.412 "params": { 00:10:26.412 "trtype": "pcie", 00:10:26.412 "traddr": "0000:00:10.0", 00:10:26.412 "name": "Nvme0" 00:10:26.412 }, 00:10:26.412 "method": "bdev_nvme_attach_controller" 00:10:26.412 }, 00:10:26.412 { 00:10:26.412 "method": "bdev_wait_for_examine" 00:10:26.412 } 00:10:26.412 ] 00:10:26.412 } 00:10:26.412 ] 00:10:26.412 } 00:10:26.412 [2024-12-09 09:21:04.091114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.670 [2024-12-09 09:21:04.143896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.670 [2024-12-09 09:21:04.187376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.670  [2024-12-09T09:21:04.651Z] Copying: 60/60 [kB] (average 29 MBps) 00:10:26.928 00:10:26.928 09:21:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:10:26.928 09:21:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:26.928 09:21:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:26.928 09:21:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:26.928 [2024-12-09 09:21:04.523764] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:26.928 [2024-12-09 09:21:04.523858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:10:26.928 { 00:10:26.928 "subsystems": [ 00:10:26.928 { 00:10:26.928 "subsystem": "bdev", 00:10:26.928 "config": [ 00:10:26.928 { 00:10:26.928 "params": { 00:10:26.928 "trtype": "pcie", 00:10:26.928 "traddr": "0000:00:10.0", 00:10:26.928 "name": "Nvme0" 00:10:26.928 }, 00:10:26.928 "method": "bdev_nvme_attach_controller" 00:10:26.928 }, 00:10:26.928 { 00:10:26.928 "method": "bdev_wait_for_examine" 00:10:26.928 } 00:10:26.928 ] 00:10:26.928 } 00:10:26.928 ] 00:10:26.928 } 00:10:27.185 [2024-12-09 09:21:04.677436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.185 [2024-12-09 09:21:04.726003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.185 [2024-12-09 09:21:04.768843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.185  [2024-12-09T09:21:05.165Z] Copying: 60/60 [kB] (average 19 MBps) 00:10:27.442 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:27.442 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:27.442 [2024-12-09 09:21:05.101022] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:27.442 [2024-12-09 09:21:05.101103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59689 ] 00:10:27.442 { 00:10:27.442 "subsystems": [ 00:10:27.442 { 00:10:27.442 "subsystem": "bdev", 00:10:27.442 "config": [ 00:10:27.442 { 00:10:27.442 "params": { 00:10:27.442 "trtype": "pcie", 00:10:27.442 "traddr": "0000:00:10.0", 00:10:27.442 "name": "Nvme0" 00:10:27.442 }, 00:10:27.442 "method": "bdev_nvme_attach_controller" 00:10:27.442 }, 00:10:27.442 { 00:10:27.442 "method": "bdev_wait_for_examine" 00:10:27.442 } 00:10:27.442 ] 00:10:27.442 } 00:10:27.442 ] 00:10:27.442 } 00:10:27.699 [2024-12-09 09:21:05.257333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.699 [2024-12-09 09:21:05.325870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.699 [2024-12-09 09:21:05.368727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.957  [2024-12-09T09:21:05.680Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:27.957 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:27.957 09:21:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.563 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:10:28.563 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:28.563 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:28.563 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:28.563 [2024-12-09 09:21:06.177683] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:28.563 [2024-12-09 09:21:06.177754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59708 ] 00:10:28.563 { 00:10:28.563 "subsystems": [ 00:10:28.563 { 00:10:28.563 "subsystem": "bdev", 00:10:28.563 "config": [ 00:10:28.563 { 00:10:28.563 "params": { 00:10:28.563 "trtype": "pcie", 00:10:28.563 "traddr": "0000:00:10.0", 00:10:28.563 "name": "Nvme0" 00:10:28.563 }, 00:10:28.563 "method": "bdev_nvme_attach_controller" 00:10:28.563 }, 00:10:28.563 { 00:10:28.563 "method": "bdev_wait_for_examine" 00:10:28.563 } 00:10:28.563 ] 00:10:28.563 } 00:10:28.563 ] 00:10:28.563 } 00:10:28.834 [2024-12-09 09:21:06.329037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.834 [2024-12-09 09:21:06.377738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.834 [2024-12-09 09:21:06.419566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.834  [2024-12-09T09:21:06.826Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:29.103 00:10:29.103 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:10:29.103 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:29.103 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:29.103 09:21:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:29.103 { 00:10:29.103 "subsystems": [ 00:10:29.103 { 00:10:29.103 "subsystem": "bdev", 00:10:29.103 "config": [ 00:10:29.103 { 00:10:29.103 "params": { 00:10:29.103 "trtype": "pcie", 00:10:29.103 "traddr": "0000:00:10.0", 00:10:29.103 "name": "Nvme0" 00:10:29.103 }, 00:10:29.103 "method": "bdev_nvme_attach_controller" 00:10:29.103 }, 00:10:29.103 { 00:10:29.103 "method": "bdev_wait_for_examine" 00:10:29.103 } 00:10:29.103 ] 00:10:29.103 } 00:10:29.103 ] 00:10:29.103 } 00:10:29.103 [2024-12-09 09:21:06.732985] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:29.103 [2024-12-09 09:21:06.733049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59726 ] 00:10:29.362 [2024-12-09 09:21:06.867306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.362 [2024-12-09 09:21:06.924839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.362 [2024-12-09 09:21:06.977968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.620  [2024-12-09T09:21:07.343Z] Copying: 60/60 [kB] (average 58 MBps) 00:10:29.620 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:29.620 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:29.620 [2024-12-09 09:21:07.296597] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:29.620 [2024-12-09 09:21:07.296697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:10:29.620 { 00:10:29.620 "subsystems": [ 00:10:29.620 { 00:10:29.620 "subsystem": "bdev", 00:10:29.620 "config": [ 00:10:29.620 { 00:10:29.620 "params": { 00:10:29.620 "trtype": "pcie", 00:10:29.620 "traddr": "0000:00:10.0", 00:10:29.620 "name": "Nvme0" 00:10:29.620 }, 00:10:29.621 "method": "bdev_nvme_attach_controller" 00:10:29.621 }, 00:10:29.621 { 00:10:29.621 "method": "bdev_wait_for_examine" 00:10:29.621 } 00:10:29.621 ] 00:10:29.621 } 00:10:29.621 ] 00:10:29.621 } 00:10:29.878 [2024-12-09 09:21:07.449008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.878 [2024-12-09 09:21:07.499049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.878 [2024-12-09 09:21:07.542771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.136  [2024-12-09T09:21:07.859Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:30.136 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:30.136 09:21:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:30.702 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:10:30.702 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:30.702 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:30.702 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:30.702 [2024-12-09 09:21:08.326054] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:30.702 [2024-12-09 09:21:08.326127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59756 ] 00:10:30.702 { 00:10:30.702 "subsystems": [ 00:10:30.702 { 00:10:30.702 "subsystem": "bdev", 00:10:30.702 "config": [ 00:10:30.702 { 00:10:30.702 "params": { 00:10:30.702 "trtype": "pcie", 00:10:30.702 "traddr": "0000:00:10.0", 00:10:30.702 "name": "Nvme0" 00:10:30.702 }, 00:10:30.702 "method": "bdev_nvme_attach_controller" 00:10:30.702 }, 00:10:30.702 { 00:10:30.702 "method": "bdev_wait_for_examine" 00:10:30.702 } 00:10:30.702 ] 00:10:30.702 } 00:10:30.702 ] 00:10:30.702 } 00:10:30.961 [2024-12-09 09:21:08.475328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.961 [2024-12-09 09:21:08.523682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.961 [2024-12-09 09:21:08.567274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.961  [2024-12-09T09:21:08.943Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:31.220 00:10:31.220 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:10:31.220 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:31.220 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:31.220 09:21:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:31.220 [2024-12-09 09:21:08.888908] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:31.220 [2024-12-09 09:21:08.889019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59775 ] 00:10:31.220 { 00:10:31.220 "subsystems": [ 00:10:31.220 { 00:10:31.220 "subsystem": "bdev", 00:10:31.220 "config": [ 00:10:31.220 { 00:10:31.220 "params": { 00:10:31.220 "trtype": "pcie", 00:10:31.220 "traddr": "0000:00:10.0", 00:10:31.220 "name": "Nvme0" 00:10:31.220 }, 00:10:31.220 "method": "bdev_nvme_attach_controller" 00:10:31.220 }, 00:10:31.220 { 00:10:31.220 "method": "bdev_wait_for_examine" 00:10:31.220 } 00:10:31.220 ] 00:10:31.220 } 00:10:31.220 ] 00:10:31.220 } 00:10:31.479 [2024-12-09 09:21:09.044569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.479 [2024-12-09 09:21:09.091772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.479 [2024-12-09 09:21:09.133919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.738  [2024-12-09T09:21:09.461Z] Copying: 56/56 [kB] (average 27 MBps) 00:10:31.738 00:10:31.738 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.738 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:31.738 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:31.739 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:31.739 { 00:10:31.739 "subsystems": [ 00:10:31.739 { 00:10:31.739 "subsystem": "bdev", 00:10:31.739 "config": [ 00:10:31.739 { 00:10:31.739 "params": { 00:10:31.739 "trtype": "pcie", 00:10:31.739 "traddr": "0000:00:10.0", 00:10:31.739 "name": "Nvme0" 00:10:31.739 }, 00:10:31.739 "method": "bdev_nvme_attach_controller" 00:10:31.739 }, 00:10:31.739 { 00:10:31.739 "method": "bdev_wait_for_examine" 00:10:31.739 } 00:10:31.739 ] 00:10:31.739 } 00:10:31.739 ] 00:10:31.739 } 00:10:31.739 [2024-12-09 09:21:09.452527] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:31.739 [2024-12-09 09:21:09.452609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:10:31.999 [2024-12-09 09:21:09.603303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.999 [2024-12-09 09:21:09.650093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.999 [2024-12-09 09:21:09.692170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.257  [2024-12-09T09:21:09.980Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:32.257 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:32.257 09:21:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:10:32.826 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:32.826 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:32.826 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 [2024-12-09 09:21:10.478413] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:32.826 [2024-12-09 09:21:10.478498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59804 ] 00:10:32.826 { 00:10:32.826 "subsystems": [ 00:10:32.826 { 00:10:32.826 "subsystem": "bdev", 00:10:32.826 "config": [ 00:10:32.826 { 00:10:32.826 "params": { 00:10:32.826 "trtype": "pcie", 00:10:32.826 "traddr": "0000:00:10.0", 00:10:32.826 "name": "Nvme0" 00:10:32.826 }, 00:10:32.826 "method": "bdev_nvme_attach_controller" 00:10:32.826 }, 00:10:32.826 { 00:10:32.826 "method": "bdev_wait_for_examine" 00:10:32.826 } 00:10:32.826 ] 00:10:32.826 } 00:10:32.826 ] 00:10:32.826 } 00:10:33.085 [2024-12-09 09:21:10.628518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.085 [2024-12-09 09:21:10.683378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.085 [2024-12-09 09:21:10.726369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.345  [2024-12-09T09:21:11.068Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:33.345 00:10:33.345 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:10:33.345 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:33.345 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:33.345 09:21:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:33.345 [2024-12-09 09:21:11.040747] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:33.345 [2024-12-09 09:21:11.040961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59823 ] 00:10:33.345 { 00:10:33.345 "subsystems": [ 00:10:33.345 { 00:10:33.345 "subsystem": "bdev", 00:10:33.345 "config": [ 00:10:33.345 { 00:10:33.345 "params": { 00:10:33.345 "trtype": "pcie", 00:10:33.345 "traddr": "0000:00:10.0", 00:10:33.345 "name": "Nvme0" 00:10:33.345 }, 00:10:33.345 "method": "bdev_nvme_attach_controller" 00:10:33.345 }, 00:10:33.345 { 00:10:33.345 "method": "bdev_wait_for_examine" 00:10:33.345 } 00:10:33.345 ] 00:10:33.345 } 00:10:33.345 ] 00:10:33.345 } 00:10:33.605 [2024-12-09 09:21:11.189294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.605 [2024-12-09 09:21:11.238506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.605 [2024-12-09 09:21:11.280743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.864  [2024-12-09T09:21:11.587Z] Copying: 56/56 [kB] (average 54 MBps) 00:10:33.864 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:33.864 09:21:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:34.123 [2024-12-09 09:21:11.599240] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:34.123 [2024-12-09 09:21:11.599316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:10:34.123 { 00:10:34.123 "subsystems": [ 00:10:34.123 { 00:10:34.123 "subsystem": "bdev", 00:10:34.123 "config": [ 00:10:34.123 { 00:10:34.123 "params": { 00:10:34.123 "trtype": "pcie", 00:10:34.123 "traddr": "0000:00:10.0", 00:10:34.123 "name": "Nvme0" 00:10:34.123 }, 00:10:34.123 "method": "bdev_nvme_attach_controller" 00:10:34.123 }, 00:10:34.123 { 00:10:34.123 "method": "bdev_wait_for_examine" 00:10:34.123 } 00:10:34.123 ] 00:10:34.123 } 00:10:34.123 ] 00:10:34.123 } 00:10:34.123 [2024-12-09 09:21:11.750377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.123 [2024-12-09 09:21:11.799196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.123 [2024-12-09 09:21:11.841460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.382  [2024-12-09T09:21:12.105Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:34.382 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:34.640 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:34.899 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:10:34.899 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:34.899 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:34.899 09:21:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:34.899 [2024-12-09 09:21:12.540424] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:34.899 [2024-12-09 09:21:12.540628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59852 ] 00:10:34.899 { 00:10:34.899 "subsystems": [ 00:10:34.899 { 00:10:34.899 "subsystem": "bdev", 00:10:34.899 "config": [ 00:10:34.899 { 00:10:34.899 "params": { 00:10:34.899 "trtype": "pcie", 00:10:34.899 "traddr": "0000:00:10.0", 00:10:34.899 "name": "Nvme0" 00:10:34.899 }, 00:10:34.899 "method": "bdev_nvme_attach_controller" 00:10:34.899 }, 00:10:34.899 { 00:10:34.899 "method": "bdev_wait_for_examine" 00:10:34.899 } 00:10:34.899 ] 00:10:34.899 } 00:10:34.899 ] 00:10:34.899 } 00:10:35.157 [2024-12-09 09:21:12.689683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.157 [2024-12-09 09:21:12.736631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.157 [2024-12-09 09:21:12.778676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.416  [2024-12-09T09:21:13.139Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:35.416 00:10:35.416 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:10:35.416 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:35.416 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:35.416 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.416 [2024-12-09 09:21:13.098411] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:35.416 [2024-12-09 09:21:13.098495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59870 ] 00:10:35.416 { 00:10:35.416 "subsystems": [ 00:10:35.416 { 00:10:35.416 "subsystem": "bdev", 00:10:35.416 "config": [ 00:10:35.416 { 00:10:35.416 "params": { 00:10:35.416 "trtype": "pcie", 00:10:35.416 "traddr": "0000:00:10.0", 00:10:35.416 "name": "Nvme0" 00:10:35.416 }, 00:10:35.416 "method": "bdev_nvme_attach_controller" 00:10:35.416 }, 00:10:35.416 { 00:10:35.416 "method": "bdev_wait_for_examine" 00:10:35.416 } 00:10:35.416 ] 00:10:35.416 } 00:10:35.416 ] 00:10:35.416 } 00:10:35.673 [2024-12-09 09:21:13.247549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.673 [2024-12-09 09:21:13.290910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.673 [2024-12-09 09:21:13.332559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.931  [2024-12-09T09:21:13.654Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:35.931 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:35.931 09:21:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:35.931 [2024-12-09 09:21:13.644054] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:35.931 [2024-12-09 09:21:13.644239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59883 ] 00:10:35.931 { 00:10:35.931 "subsystems": [ 00:10:35.931 { 00:10:35.931 "subsystem": "bdev", 00:10:35.931 "config": [ 00:10:35.931 { 00:10:35.931 "params": { 00:10:35.931 "trtype": "pcie", 00:10:35.931 "traddr": "0000:00:10.0", 00:10:35.931 "name": "Nvme0" 00:10:35.931 }, 00:10:35.931 "method": "bdev_nvme_attach_controller" 00:10:35.931 }, 00:10:35.931 { 00:10:35.931 "method": "bdev_wait_for_examine" 00:10:35.931 } 00:10:35.931 ] 00:10:35.931 } 00:10:35.931 ] 00:10:35.931 } 00:10:36.187 [2024-12-09 09:21:13.780019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.187 [2024-12-09 09:21:13.834406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.187 [2024-12-09 09:21:13.877677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.444  [2024-12-09T09:21:14.167Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:36.444 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:10:36.444 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.009 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:10:37.009 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:10:37.009 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.009 09:21:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.009 [2024-12-09 09:21:14.599335] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:37.009 [2024-12-09 09:21:14.599642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:10:37.009 { 00:10:37.009 "subsystems": [ 00:10:37.009 { 00:10:37.009 "subsystem": "bdev", 00:10:37.009 "config": [ 00:10:37.009 { 00:10:37.009 "params": { 00:10:37.009 "trtype": "pcie", 00:10:37.009 "traddr": "0000:00:10.0", 00:10:37.009 "name": "Nvme0" 00:10:37.009 }, 00:10:37.009 "method": "bdev_nvme_attach_controller" 00:10:37.009 }, 00:10:37.009 { 00:10:37.009 "method": "bdev_wait_for_examine" 00:10:37.009 } 00:10:37.009 ] 00:10:37.009 } 00:10:37.009 ] 00:10:37.009 } 00:10:37.267 [2024-12-09 09:21:14.750991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.267 [2024-12-09 09:21:14.816559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.267 [2024-12-09 09:21:14.858381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.267  [2024-12-09T09:21:15.248Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:37.525 00:10:37.525 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:10:37.525 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:10:37.525 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:37.525 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:37.525 [2024-12-09 09:21:15.171639] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:37.525 [2024-12-09 09:21:15.171706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:10:37.525 { 00:10:37.525 "subsystems": [ 00:10:37.525 { 00:10:37.525 "subsystem": "bdev", 00:10:37.525 "config": [ 00:10:37.525 { 00:10:37.525 "params": { 00:10:37.525 "trtype": "pcie", 00:10:37.525 "traddr": "0000:00:10.0", 00:10:37.525 "name": "Nvme0" 00:10:37.525 }, 00:10:37.525 "method": "bdev_nvme_attach_controller" 00:10:37.525 }, 00:10:37.525 { 00:10:37.525 "method": "bdev_wait_for_examine" 00:10:37.525 } 00:10:37.525 ] 00:10:37.525 } 00:10:37.525 ] 00:10:37.525 } 00:10:37.783 [2024-12-09 09:21:15.322743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.783 [2024-12-09 09:21:15.371544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.783 [2024-12-09 09:21:15.413531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.042  [2024-12-09T09:21:15.765Z] Copying: 48/48 [kB] (average 46 MBps) 00:10:38.042 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:38.042 09:21:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.042 [2024-12-09 09:21:15.726656] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:38.042 [2024-12-09 09:21:15.726905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:10:38.042 { 00:10:38.042 "subsystems": [ 00:10:38.042 { 00:10:38.042 "subsystem": "bdev", 00:10:38.042 "config": [ 00:10:38.042 { 00:10:38.042 "params": { 00:10:38.042 "trtype": "pcie", 00:10:38.042 "traddr": "0000:00:10.0", 00:10:38.042 "name": "Nvme0" 00:10:38.042 }, 00:10:38.042 "method": "bdev_nvme_attach_controller" 00:10:38.042 }, 00:10:38.042 { 00:10:38.042 "method": "bdev_wait_for_examine" 00:10:38.042 } 00:10:38.042 ] 00:10:38.042 } 00:10:38.042 ] 00:10:38.042 } 00:10:38.299 [2024-12-09 09:21:15.874941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.299 [2024-12-09 09:21:15.926026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.299 [2024-12-09 09:21:15.968071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.559  [2024-12-09T09:21:16.283Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:10:38.560 00:10:38.560 ************************************ 00:10:38.560 END TEST dd_rw 00:10:38.560 ************************************ 00:10:38.560 00:10:38.560 real 0m12.904s 00:10:38.560 user 0m9.060s 00:10:38.560 sys 0m4.958s 00:10:38.560 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.560 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:38.819 ************************************ 00:10:38.819 START TEST dd_rw_offset 00:10:38.819 ************************************ 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:38.819 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:10:38.820 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=dajn5a6gffmvq7zsor70e0s3gw064wy5czepoe967n9gqz3hejxch8tlt6u3g3y3yogwzwlopxvve1yqakqza2pdi8pe6jmowst3lv59asi7kuta7ip5pa7w13v48fx194pnv04itg9fz2ulsh9u2d561jyzquggn7twj3geejvj9d376l4om6898dfzn5qhlew0wthmbbv3ccetqi1t5itr0dz3qcyqcr5zvqk2qwfnautk2jdch284cmfmphy3mn6se7bvqftfboamk7s4j0nl278iebzedod4ahzsi4wzmsmtgdo150cmcj89ezn32oaovro6q0rik56kkczxchuwhlvyvcjpjjhbhsb0rbef9pe00yju59duw43kpq5efj2iyprkjdr8ft6qokzb6uli01p271m2f31nt4hvfqm8377v4zqk1qqao8p56cgo0admglh11tnf8issuk8v8o21ywx2ab89pvj2kl3b84zsgkw0zkr1a4zked35xe2y6k1qxkg7aamwbd3n5vm71j60gm0p8lfir3soqboihygqgjlzr6vqyh3c5vqmmu6804yf7aryunk0y2jaw3qs3dhy00rmpowgkxi5zpxlx15bkpf5mzu8ra1j3ymroyylqqmj64620mnasknxtya6k5n9nxuw8lwlfwxsk8s23m0b3yb0lnybfk66l117kra53dvggsgcb7xauyb5eaox0imrc23danue0kzdyausp9e2hfarn79fsvuxhec7wotbbw1krjipb9vjpk227lj90sqqhnccn6unanp6h1unuw8n3e1a5jiiq225v93fxw9iz3xbo9y579nfztwbccykrdy01owpmq2513mgqg61dcvij11u361x2gx3qcb958zurlj2jgbgqgp457iuh4h0xvl0qadz7piiv42kgiqlrjijb2ueju6rwrpn5jw6bxzc04ox6ym5gz8vrlfr9t28kev9l48ovc2cq5v83dgg210zvv14r377f8lo9rswf7cupa6f7scftrzq4x171r8yfirce8uidvd02zvz6rrau81mqti3lxzboocbyldiuc1msbq8aik2ccyjm7dhmfav63xbea63s2bkizae6fznxtl94p23c7k0iwnwmljqdz0vj20q51ln3e2nqxi2sz5iy5sy4n13axpj6cqz62pc386etd4bfveudu250wjlrb8werqcg00zhrifkv6u7a51k5jyreqywvxx10ic9adjsjecj9z6jtq87zoxnhwxft3z8h8e08m8f9rye2zxve470lnuqri5hljatjeyexopk8ucrvg4o97aib1q6w6u7wx9hk02xwkohzhv2k8zstskmtglahol7m9we39oh4lfbzfwoqwkf3knpupk71rglfn44oacevp17zfbd1kd2z5eqpqqpc2cxquq02s3y83tnrpdvsjl1rjsdx710s50hnod83csr4ctw1mvt8b16j7dbguyyxy8le5nonxy8zeicv4h1490lmk2mo094p6dm8kkg811quse4vihhr2ounoqrk62tbm87bkt9olsa6rg2v9m2zaufmlnusz1lezvodck4jw8yn3pcggfdflv3kp205pjxwv1k4g38ggmoggxuirw1yenv2qmpj244kewlv16g3vdj0h7kn0zib8thumk28r6ddgag26j52mef9j8zk3gzi9xm9iof9wlt0o369skr3i3foj3o6xvqb4ytv4x757lpspmvy1ic6kb5dqbyvhfs0qet4e6bpcuq7r9d0lsc28ac1750i6gqx5i5vnw7s8ncnf6dkm7txwppuzf4ca9xj61v68ia7hru6hhhfunrngwfuas6ohg81mndad1su8xajgfh3lxo5mfzkl8q9d8axa7pdps7y35bq0xavucal1icleq5r6pjyx2ssop9j2g1h529zmdao1i9mvulzi5azso8zhti64ylnofj3s07z2byxwnhcmf0pks48jkr1gus1llhukk3tjb52l7jvxh003r2895mxhcw22bh6984h92h5wzahatnbcznkogh00pogpmjoh3do9mw9nq70dhqocxl7vbzn54nomle2n8z3sgoe8fzxptjjsbvl0341nzt7w9xex0vmk3m5akqed9022xd97u3773ptq1ws416db6b9hb6iprzaqrl5y968wxj5cuhocqr251yz0cn86vlxdvms8a8maei2autnp5lcj39xxpbyuhbcw10k86yi4v9u7ccj3u9wyrx9r8n7uibc4e42iaenod0m80r1zt2ex1jdj2jo918hsbijikfsfyh1swi7qtqqr5b8vrybyk5ojxrouucur3onrsasy4kymginqlva2xf0cfjs7f4mzobqcse4faulmrsbi4iq6iucdrppvzi7enasf1b0ofpkyn8xur3072k9jpgm5woso5i4xquhj8gx3w2cwer5pf5zygnuo51wlxjrq72lbp0ji164ysir5d2tgiklt1tm95n345so6w5nzq704l264kdyr62iszr8patxdoe7svwo5d6bzbgdphjfw3ftsd4ox7ucuz0bws68i46qgz0r1ngdwzfgustm9z10sw16cwz2ch7m8baqcavz1hh0jzjmavi69tvap6o8igen4093fska2ro595s4goj7fhbu9y2pk7ppl6lgn0iymy2uc643n9afokexks66sulx4c6jao6m5ii6v8uupz8sgghrt9oeoefnfy7ez6q2ntbx3q10ntuc0wxzp4akxrsd38tagz547g5t0inn1ei4o8bt1x1udgtknm6q6rbnqkm8xpsywnrerbdugliptu20x55c5q310pbeaty6rmkqrwon7pfpxnqhd8khrorfmrbh6hznx0xezvs66z81zrnj581fnjng16obcu5inqbm7jvuw6dxl0mvpx1bvvquxbngm1eww6f98mdfkqwxexrsjcc1787fkz6jozw8k5u1m6asw5fke8016jip0lhwjnwuek1cp5v7vntmx1aqkz5ptya47ls3ryj4e1p9rqcab82tp2j1ljytbhzpum71zayzyzxe03iebyigxibq1maggcr9dtgqno31nfhvhpmq5t3d4prowjmkxs5y63zkt9x7s82ngixxc743pglr88d0dagr91jhq7byiurgfd71yjmgvb89bdy1jx5tcwsninst5iusly2bnxccs5na4rk15ksroc5ftk8zewzrepq0j84k9o9h9e2h24zbzmjka9p8h4tqavf0byrjfnvn46b5kpqyxbof6obeurocabd3rjuvvbi1mhnxwtyc0b8amfyh0ukv7iylv6fiovzqmijz5zkh4ps4ufhqiuyd7h7towxdds5r25ck01ih40zvsz6zm4fb5sf2ghmpqhurvukj9h3b9foea0hb1ll2j4lzhz9m4o50q4fr7j1ayilz4j6hnq3mizwfbtswy0hkn0bclvpvhj6duty1yqmccepy58vyfcxr44z9bluy692v8ovbfaidcavxjmdx
42l8xc0x72kb3ojy0hkce3nv18g1di6nb3o69xh0pz45m9940jfi2l60phwvn3y43m8cbyewc25pja9dnn5vd54chnoeq5aixuwz5pttuxiij1o5jumuyna0eh32o3eed38l35my9p6wbq1t2wxjpfpxiwi7yqvqg4a44hp9ajpm4zamqot975hsa08f8guh95cffpl3uuxg6cvopsydva7ma5sjyw1atd4c2d68f5kfrr4ihmacjrhy4wcmm14z183y92zgmvy1np939d8gl1hgmtqaghiow9d80x1c5slu5syu3t11cy9av4zcuy67lhtxjvt8oq4m67i82fg4267jqbfyy45k5vfpedsh0sguqvd3tra5wcau1p7o9ib8c9zzgitneqsnf96hi0vhuraypfmsn9bmtqte6thj5tynd1lgajid0umk9o04ebnz042onkab7v8xse1991kb04a0v7vc2ujqnv78saygrcpqgdc6ikzs0jtkkvmdog7xpny5319k22zh0x09j99ncajr5thsc1t4mr 00:10:38.820 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:10:38.820 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:10:38.820 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:38.820 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 [2024-12-09 09:21:16.405553] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:38.820 [2024-12-09 09:21:16.405775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:10:38.820 { 00:10:38.820 "subsystems": [ 00:10:38.820 { 00:10:38.820 "subsystem": "bdev", 00:10:38.820 "config": [ 00:10:38.820 { 00:10:38.820 "params": { 00:10:38.820 "trtype": "pcie", 00:10:38.820 "traddr": "0000:00:10.0", 00:10:38.820 "name": "Nvme0" 00:10:38.820 }, 00:10:38.820 "method": "bdev_nvme_attach_controller" 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "method": "bdev_wait_for_examine" 00:10:38.820 } 00:10:38.820 ] 00:10:38.820 } 00:10:38.820 ] 00:10:38.820 } 00:10:39.078 [2024-12-09 09:21:16.545967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.078 [2024-12-09 09:21:16.593422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.078 [2024-12-09 09:21:16.636116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.078  [2024-12-09T09:21:17.059Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:39.336 00:10:39.336 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:10:39.336 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:10:39.337 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:39.337 09:21:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:39.337 [2024-12-09 09:21:16.947374] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:39.337 [2024-12-09 09:21:16.947450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:10:39.337 { 00:10:39.337 "subsystems": [ 00:10:39.337 { 00:10:39.337 "subsystem": "bdev", 00:10:39.337 "config": [ 00:10:39.337 { 00:10:39.337 "params": { 00:10:39.337 "trtype": "pcie", 00:10:39.337 "traddr": "0000:00:10.0", 00:10:39.337 "name": "Nvme0" 00:10:39.337 }, 00:10:39.337 "method": "bdev_nvme_attach_controller" 00:10:39.337 }, 00:10:39.337 { 00:10:39.337 "method": "bdev_wait_for_examine" 00:10:39.337 } 00:10:39.337 ] 00:10:39.337 } 00:10:39.337 ] 00:10:39.337 } 00:10:39.595 [2024-12-09 09:21:17.096748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.595 [2024-12-09 09:21:17.139451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.595 [2024-12-09 09:21:17.180663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.595  [2024-12-09T09:21:17.577Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:10:39.854 00:10:39.854 09:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ dajn5a6gffmvq7zsor70e0s3gw064wy5czepoe967n9gqz3hejxch8tlt6u3g3y3yogwzwlopxvve1yqakqza2pdi8pe6jmowst3lv59asi7kuta7ip5pa7w13v48fx194pnv04itg9fz2ulsh9u2d561jyzquggn7twj3geejvj9d376l4om6898dfzn5qhlew0wthmbbv3ccetqi1t5itr0dz3qcyqcr5zvqk2qwfnautk2jdch284cmfmphy3mn6se7bvqftfboamk7s4j0nl278iebzedod4ahzsi4wzmsmtgdo150cmcj89ezn32oaovro6q0rik56kkczxchuwhlvyvcjpjjhbhsb0rbef9pe00yju59duw43kpq5efj2iyprkjdr8ft6qokzb6uli01p271m2f31nt4hvfqm8377v4zqk1qqao8p56cgo0admglh11tnf8issuk8v8o21ywx2ab89pvj2kl3b84zsgkw0zkr1a4zked35xe2y6k1qxkg7aamwbd3n5vm71j60gm0p8lfir3soqboihygqgjlzr6vqyh3c5vqmmu6804yf7aryunk0y2jaw3qs3dhy00rmpowgkxi5zpxlx15bkpf5mzu8ra1j3ymroyylqqmj64620mnasknxtya6k5n9nxuw8lwlfwxsk8s23m0b3yb0lnybfk66l117kra53dvggsgcb7xauyb5eaox0imrc23danue0kzdyausp9e2hfarn79fsvuxhec7wotbbw1krjipb9vjpk227lj90sqqhnccn6unanp6h1unuw8n3e1a5jiiq225v93fxw9iz3xbo9y579nfztwbccykrdy01owpmq2513mgqg61dcvij11u361x2gx3qcb958zurlj2jgbgqgp457iuh4h0xvl0qadz7piiv42kgiqlrjijb2ueju6rwrpn5jw6bxzc04ox6ym5gz8vrlfr9t28kev9l48ovc2cq5v83dgg210zvv14r377f8lo9rswf7cupa6f7scftrzq4x171r8yfirce8uidvd02zvz6rrau81mqti3lxzboocbyldiuc1msbq8aik2ccyjm7dhmfav63xbea63s2bkizae6fznxtl94p23c7k0iwnwmljqdz0vj20q51ln3e2nqxi2sz5iy5sy4n13axpj6cqz62pc386etd4bfveudu250wjlrb8werqcg00zhrifkv6u7a51k5jyreqywvxx10ic9adjsjecj9z6jtq87zoxnhwxft3z8h8e08m8f9rye2zxve470lnuqri5hljatjeyexopk8ucrvg4o97aib1q6w6u7wx9hk02xwkohzhv2k8zstskmtglahol7m9we39oh4lfbzfwoqwkf3knpupk71rglfn44oacevp17zfbd1kd2z5eqpqqpc2cxquq02s3y83tnrpdvsjl1rjsdx710s50hnod83csr4ctw1mvt8b16j7dbguyyxy8le5nonxy8zeicv4h1490lmk2mo094p6dm8kkg811quse4vihhr2ounoqrk62tbm87bkt9olsa6rg2v9m2zaufmlnusz1lezvodck4jw8yn3pcggfdflv3kp205pjxwv1k4g38ggmoggxuirw1yenv2qmpj244kewlv16g3vdj0h7kn0zib8thumk28r6ddgag26j52mef9j8zk3gzi9xm9iof9wlt0o369skr3i3foj3o6xvqb4ytv4x757lpspmvy1ic6kb5dqbyvhfs0qet4e6bpcuq7r9d0lsc28ac1750i6gqx5i5vnw7s8ncnf6dkm7txwppuzf4ca9xj61v68ia7hru6hhhfunrngwfuas6ohg81mndad1su8xajgfh3lxo5mfzkl8q9d8axa7pdps7y35bq0xavucal1icleq5r6pjyx2ssop9j2g1h529zmdao1i9mvulzi5azso8zhti64ylnofj3s07z2byxwnhcmf0pks48jkr1gus1llhukk3tjb52l7jvxh003r2895mxhcw22bh6984h92h5wzahatnbcznkogh00pogpmjoh3do9mw9nq70dhqocxl7vbzn54nomle2n8z3sgoe8fzxptjjsbv
l0341nzt7w9xex0vmk3m5akqed9022xd97u3773ptq1ws416db6b9hb6iprzaqrl5y968wxj5cuhocqr251yz0cn86vlxdvms8a8maei2autnp5lcj39xxpbyuhbcw10k86yi4v9u7ccj3u9wyrx9r8n7uibc4e42iaenod0m80r1zt2ex1jdj2jo918hsbijikfsfyh1swi7qtqqr5b8vrybyk5ojxrouucur3onrsasy4kymginqlva2xf0cfjs7f4mzobqcse4faulmrsbi4iq6iucdrppvzi7enasf1b0ofpkyn8xur3072k9jpgm5woso5i4xquhj8gx3w2cwer5pf5zygnuo51wlxjrq72lbp0ji164ysir5d2tgiklt1tm95n345so6w5nzq704l264kdyr62iszr8patxdoe7svwo5d6bzbgdphjfw3ftsd4ox7ucuz0bws68i46qgz0r1ngdwzfgustm9z10sw16cwz2ch7m8baqcavz1hh0jzjmavi69tvap6o8igen4093fska2ro595s4goj7fhbu9y2pk7ppl6lgn0iymy2uc643n9afokexks66sulx4c6jao6m5ii6v8uupz8sgghrt9oeoefnfy7ez6q2ntbx3q10ntuc0wxzp4akxrsd38tagz547g5t0inn1ei4o8bt1x1udgtknm6q6rbnqkm8xpsywnrerbdugliptu20x55c5q310pbeaty6rmkqrwon7pfpxnqhd8khrorfmrbh6hznx0xezvs66z81zrnj581fnjng16obcu5inqbm7jvuw6dxl0mvpx1bvvquxbngm1eww6f98mdfkqwxexrsjcc1787fkz6jozw8k5u1m6asw5fke8016jip0lhwjnwuek1cp5v7vntmx1aqkz5ptya47ls3ryj4e1p9rqcab82tp2j1ljytbhzpum71zayzyzxe03iebyigxibq1maggcr9dtgqno31nfhvhpmq5t3d4prowjmkxs5y63zkt9x7s82ngixxc743pglr88d0dagr91jhq7byiurgfd71yjmgvb89bdy1jx5tcwsninst5iusly2bnxccs5na4rk15ksroc5ftk8zewzrepq0j84k9o9h9e2h24zbzmjka9p8h4tqavf0byrjfnvn46b5kpqyxbof6obeurocabd3rjuvvbi1mhnxwtyc0b8amfyh0ukv7iylv6fiovzqmijz5zkh4ps4ufhqiuyd7h7towxdds5r25ck01ih40zvsz6zm4fb5sf2ghmpqhurvukj9h3b9foea0hb1ll2j4lzhz9m4o50q4fr7j1ayilz4j6hnq3mizwfbtswy0hkn0bclvpvhj6duty1yqmccepy58vyfcxr44z9bluy692v8ovbfaidcavxjmdx42l8xc0x72kb3ojy0hkce3nv18g1di6nb3o69xh0pz45m9940jfi2l60phwvn3y43m8cbyewc25pja9dnn5vd54chnoeq5aixuwz5pttuxiij1o5jumuyna0eh32o3eed38l35my9p6wbq1t2wxjpfpxiwi7yqvqg4a44hp9ajpm4zamqot975hsa08f8guh95cffpl3uuxg6cvopsydva7ma5sjyw1atd4c2d68f5kfrr4ihmacjrhy4wcmm14z183y92zgmvy1np939d8gl1hgmtqaghiow9d80x1c5slu5syu3t11cy9av4zcuy67lhtxjvt8oq4m67i82fg4267jqbfyy45k5vfpedsh0sguqvd3tra5wcau1p7o9ib8c9zzgitneqsnf96hi0vhuraypfmsn9bmtqte6thj5tynd1lgajid0umk9o04ebnz042onkab7v8xse1991kb04a0v7vc2ujqnv78saygrcpqgdc6ikzs0jtkkvmdog7xpny5319k22zh0x09j99ncajr5thsc1t4mr == 
\d\a\j\n\5\a\6\g\f\f\m\v\q\7\z\s\o\r\7\0\e\0\s\3\g\w\0\6\4\w\y\5\c\z\e\p\o\e\9\6\7\n\9\g\q\z\3\h\e\j\x\c\h\8\t\l\t\6\u\3\g\3\y\3\y\o\g\w\z\w\l\o\p\x\v\v\e\1\y\q\a\k\q\z\a\2\p\d\i\8\p\e\6\j\m\o\w\s\t\3\l\v\5\9\a\s\i\7\k\u\t\a\7\i\p\5\p\a\7\w\1\3\v\4\8\f\x\1\9\4\p\n\v\0\4\i\t\g\9\f\z\2\u\l\s\h\9\u\2\d\5\6\1\j\y\z\q\u\g\g\n\7\t\w\j\3\g\e\e\j\v\j\9\d\3\7\6\l\4\o\m\6\8\9\8\d\f\z\n\5\q\h\l\e\w\0\w\t\h\m\b\b\v\3\c\c\e\t\q\i\1\t\5\i\t\r\0\d\z\3\q\c\y\q\c\r\5\z\v\q\k\2\q\w\f\n\a\u\t\k\2\j\d\c\h\2\8\4\c\m\f\m\p\h\y\3\m\n\6\s\e\7\b\v\q\f\t\f\b\o\a\m\k\7\s\4\j\0\n\l\2\7\8\i\e\b\z\e\d\o\d\4\a\h\z\s\i\4\w\z\m\s\m\t\g\d\o\1\5\0\c\m\c\j\8\9\e\z\n\3\2\o\a\o\v\r\o\6\q\0\r\i\k\5\6\k\k\c\z\x\c\h\u\w\h\l\v\y\v\c\j\p\j\j\h\b\h\s\b\0\r\b\e\f\9\p\e\0\0\y\j\u\5\9\d\u\w\4\3\k\p\q\5\e\f\j\2\i\y\p\r\k\j\d\r\8\f\t\6\q\o\k\z\b\6\u\l\i\0\1\p\2\7\1\m\2\f\3\1\n\t\4\h\v\f\q\m\8\3\7\7\v\4\z\q\k\1\q\q\a\o\8\p\5\6\c\g\o\0\a\d\m\g\l\h\1\1\t\n\f\8\i\s\s\u\k\8\v\8\o\2\1\y\w\x\2\a\b\8\9\p\v\j\2\k\l\3\b\8\4\z\s\g\k\w\0\z\k\r\1\a\4\z\k\e\d\3\5\x\e\2\y\6\k\1\q\x\k\g\7\a\a\m\w\b\d\3\n\5\v\m\7\1\j\6\0\g\m\0\p\8\l\f\i\r\3\s\o\q\b\o\i\h\y\g\q\g\j\l\z\r\6\v\q\y\h\3\c\5\v\q\m\m\u\6\8\0\4\y\f\7\a\r\y\u\n\k\0\y\2\j\a\w\3\q\s\3\d\h\y\0\0\r\m\p\o\w\g\k\x\i\5\z\p\x\l\x\1\5\b\k\p\f\5\m\z\u\8\r\a\1\j\3\y\m\r\o\y\y\l\q\q\m\j\6\4\6\2\0\m\n\a\s\k\n\x\t\y\a\6\k\5\n\9\n\x\u\w\8\l\w\l\f\w\x\s\k\8\s\2\3\m\0\b\3\y\b\0\l\n\y\b\f\k\6\6\l\1\1\7\k\r\a\5\3\d\v\g\g\s\g\c\b\7\x\a\u\y\b\5\e\a\o\x\0\i\m\r\c\2\3\d\a\n\u\e\0\k\z\d\y\a\u\s\p\9\e\2\h\f\a\r\n\7\9\f\s\v\u\x\h\e\c\7\w\o\t\b\b\w\1\k\r\j\i\p\b\9\v\j\p\k\2\2\7\l\j\9\0\s\q\q\h\n\c\c\n\6\u\n\a\n\p\6\h\1\u\n\u\w\8\n\3\e\1\a\5\j\i\i\q\2\2\5\v\9\3\f\x\w\9\i\z\3\x\b\o\9\y\5\7\9\n\f\z\t\w\b\c\c\y\k\r\d\y\0\1\o\w\p\m\q\2\5\1\3\m\g\q\g\6\1\d\c\v\i\j\1\1\u\3\6\1\x\2\g\x\3\q\c\b\9\5\8\z\u\r\l\j\2\j\g\b\g\q\g\p\4\5\7\i\u\h\4\h\0\x\v\l\0\q\a\d\z\7\p\i\i\v\4\2\k\g\i\q\l\r\j\i\j\b\2\u\e\j\u\6\r\w\r\p\n\5\j\w\6\b\x\z\c\0\4\o\x\6\y\m\5\g\z\8\v\r\l\f\r\9\t\2\8\k\e\v\9\l\4\8\o\v\c\2\c\q\5\v\8\3\d\g\g\2\1\0\z\v\v\1\4\r\3\7\7\f\8\l\o\9\r\s\w\f\7\c\u\p\a\6\f\7\s\c\f\t\r\z\q\4\x\1\7\1\r\8\y\f\i\r\c\e\8\u\i\d\v\d\0\2\z\v\z\6\r\r\a\u\8\1\m\q\t\i\3\l\x\z\b\o\o\c\b\y\l\d\i\u\c\1\m\s\b\q\8\a\i\k\2\c\c\y\j\m\7\d\h\m\f\a\v\6\3\x\b\e\a\6\3\s\2\b\k\i\z\a\e\6\f\z\n\x\t\l\9\4\p\2\3\c\7\k\0\i\w\n\w\m\l\j\q\d\z\0\v\j\2\0\q\5\1\l\n\3\e\2\n\q\x\i\2\s\z\5\i\y\5\s\y\4\n\1\3\a\x\p\j\6\c\q\z\6\2\p\c\3\8\6\e\t\d\4\b\f\v\e\u\d\u\2\5\0\w\j\l\r\b\8\w\e\r\q\c\g\0\0\z\h\r\i\f\k\v\6\u\7\a\5\1\k\5\j\y\r\e\q\y\w\v\x\x\1\0\i\c\9\a\d\j\s\j\e\c\j\9\z\6\j\t\q\8\7\z\o\x\n\h\w\x\f\t\3\z\8\h\8\e\0\8\m\8\f\9\r\y\e\2\z\x\v\e\4\7\0\l\n\u\q\r\i\5\h\l\j\a\t\j\e\y\e\x\o\p\k\8\u\c\r\v\g\4\o\9\7\a\i\b\1\q\6\w\6\u\7\w\x\9\h\k\0\2\x\w\k\o\h\z\h\v\2\k\8\z\s\t\s\k\m\t\g\l\a\h\o\l\7\m\9\w\e\3\9\o\h\4\l\f\b\z\f\w\o\q\w\k\f\3\k\n\p\u\p\k\7\1\r\g\l\f\n\4\4\o\a\c\e\v\p\1\7\z\f\b\d\1\k\d\2\z\5\e\q\p\q\q\p\c\2\c\x\q\u\q\0\2\s\3\y\8\3\t\n\r\p\d\v\s\j\l\1\r\j\s\d\x\7\1\0\s\5\0\h\n\o\d\8\3\c\s\r\4\c\t\w\1\m\v\t\8\b\1\6\j\7\d\b\g\u\y\y\x\y\8\l\e\5\n\o\n\x\y\8\z\e\i\c\v\4\h\1\4\9\0\l\m\k\2\m\o\0\9\4\p\6\d\m\8\k\k\g\8\1\1\q\u\s\e\4\v\i\h\h\r\2\o\u\n\o\q\r\k\6\2\t\b\m\8\7\b\k\t\9\o\l\s\a\6\r\g\2\v\9\m\2\z\a\u\f\m\l\n\u\s\z\1\l\e\z\v\o\d\c\k\4\j\w\8\y\n\3\p\c\g\g\f\d\f\l\v\3\k\p\2\0\5\p\j\x\w\v\1\k\4\g\3\8\g\g\m\o\g\g\x\u\i\r\w\1\y\e\n\v\2\q\m\p\j\2\4\4\k\e\w\l\v\1\6\g\3\v\d\j\0\h\7\k\n\0\z\i\b\8\t\h\u\m\k\2\8\r\6\d\d\g\a\g\2\6\j\5\2\m\e\f\9\j\8\z\k\3\g\z\i\9\x\m\9\i\o\f\9\w\l\t\0\o\3\6\9\s\k\r\3\i\3\f\o\j\3\o\6\x\v\q\b\4\y\t\v\4\x\7\5\7\l\p\s\p\m\v\y\1\i\c\6\k\b\5\d\q\b\y\v\h\f\s\0\q\e\t\
4\e\6\b\p\c\u\q\7\r\9\d\0\l\s\c\2\8\a\c\1\7\5\0\i\6\g\q\x\5\i\5\v\n\w\7\s\8\n\c\n\f\6\d\k\m\7\t\x\w\p\p\u\z\f\4\c\a\9\x\j\6\1\v\6\8\i\a\7\h\r\u\6\h\h\h\f\u\n\r\n\g\w\f\u\a\s\6\o\h\g\8\1\m\n\d\a\d\1\s\u\8\x\a\j\g\f\h\3\l\x\o\5\m\f\z\k\l\8\q\9\d\8\a\x\a\7\p\d\p\s\7\y\3\5\b\q\0\x\a\v\u\c\a\l\1\i\c\l\e\q\5\r\6\p\j\y\x\2\s\s\o\p\9\j\2\g\1\h\5\2\9\z\m\d\a\o\1\i\9\m\v\u\l\z\i\5\a\z\s\o\8\z\h\t\i\6\4\y\l\n\o\f\j\3\s\0\7\z\2\b\y\x\w\n\h\c\m\f\0\p\k\s\4\8\j\k\r\1\g\u\s\1\l\l\h\u\k\k\3\t\j\b\5\2\l\7\j\v\x\h\0\0\3\r\2\8\9\5\m\x\h\c\w\2\2\b\h\6\9\8\4\h\9\2\h\5\w\z\a\h\a\t\n\b\c\z\n\k\o\g\h\0\0\p\o\g\p\m\j\o\h\3\d\o\9\m\w\9\n\q\7\0\d\h\q\o\c\x\l\7\v\b\z\n\5\4\n\o\m\l\e\2\n\8\z\3\s\g\o\e\8\f\z\x\p\t\j\j\s\b\v\l\0\3\4\1\n\z\t\7\w\9\x\e\x\0\v\m\k\3\m\5\a\k\q\e\d\9\0\2\2\x\d\9\7\u\3\7\7\3\p\t\q\1\w\s\4\1\6\d\b\6\b\9\h\b\6\i\p\r\z\a\q\r\l\5\y\9\6\8\w\x\j\5\c\u\h\o\c\q\r\2\5\1\y\z\0\c\n\8\6\v\l\x\d\v\m\s\8\a\8\m\a\e\i\2\a\u\t\n\p\5\l\c\j\3\9\x\x\p\b\y\u\h\b\c\w\1\0\k\8\6\y\i\4\v\9\u\7\c\c\j\3\u\9\w\y\r\x\9\r\8\n\7\u\i\b\c\4\e\4\2\i\a\e\n\o\d\0\m\8\0\r\1\z\t\2\e\x\1\j\d\j\2\j\o\9\1\8\h\s\b\i\j\i\k\f\s\f\y\h\1\s\w\i\7\q\t\q\q\r\5\b\8\v\r\y\b\y\k\5\o\j\x\r\o\u\u\c\u\r\3\o\n\r\s\a\s\y\4\k\y\m\g\i\n\q\l\v\a\2\x\f\0\c\f\j\s\7\f\4\m\z\o\b\q\c\s\e\4\f\a\u\l\m\r\s\b\i\4\i\q\6\i\u\c\d\r\p\p\v\z\i\7\e\n\a\s\f\1\b\0\o\f\p\k\y\n\8\x\u\r\3\0\7\2\k\9\j\p\g\m\5\w\o\s\o\5\i\4\x\q\u\h\j\8\g\x\3\w\2\c\w\e\r\5\p\f\5\z\y\g\n\u\o\5\1\w\l\x\j\r\q\7\2\l\b\p\0\j\i\1\6\4\y\s\i\r\5\d\2\t\g\i\k\l\t\1\t\m\9\5\n\3\4\5\s\o\6\w\5\n\z\q\7\0\4\l\2\6\4\k\d\y\r\6\2\i\s\z\r\8\p\a\t\x\d\o\e\7\s\v\w\o\5\d\6\b\z\b\g\d\p\h\j\f\w\3\f\t\s\d\4\o\x\7\u\c\u\z\0\b\w\s\6\8\i\4\6\q\g\z\0\r\1\n\g\d\w\z\f\g\u\s\t\m\9\z\1\0\s\w\1\6\c\w\z\2\c\h\7\m\8\b\a\q\c\a\v\z\1\h\h\0\j\z\j\m\a\v\i\6\9\t\v\a\p\6\o\8\i\g\e\n\4\0\9\3\f\s\k\a\2\r\o\5\9\5\s\4\g\o\j\7\f\h\b\u\9\y\2\p\k\7\p\p\l\6\l\g\n\0\i\y\m\y\2\u\c\6\4\3\n\9\a\f\o\k\e\x\k\s\6\6\s\u\l\x\4\c\6\j\a\o\6\m\5\i\i\6\v\8\u\u\p\z\8\s\g\g\h\r\t\9\o\e\o\e\f\n\f\y\7\e\z\6\q\2\n\t\b\x\3\q\1\0\n\t\u\c\0\w\x\z\p\4\a\k\x\r\s\d\3\8\t\a\g\z\5\4\7\g\5\t\0\i\n\n\1\e\i\4\o\8\b\t\1\x\1\u\d\g\t\k\n\m\6\q\6\r\b\n\q\k\m\8\x\p\s\y\w\n\r\e\r\b\d\u\g\l\i\p\t\u\2\0\x\5\5\c\5\q\3\1\0\p\b\e\a\t\y\6\r\m\k\q\r\w\o\n\7\p\f\p\x\n\q\h\d\8\k\h\r\o\r\f\m\r\b\h\6\h\z\n\x\0\x\e\z\v\s\6\6\z\8\1\z\r\n\j\5\8\1\f\n\j\n\g\1\6\o\b\c\u\5\i\n\q\b\m\7\j\v\u\w\6\d\x\l\0\m\v\p\x\1\b\v\v\q\u\x\b\n\g\m\1\e\w\w\6\f\9\8\m\d\f\k\q\w\x\e\x\r\s\j\c\c\1\7\8\7\f\k\z\6\j\o\z\w\8\k\5\u\1\m\6\a\s\w\5\f\k\e\8\0\1\6\j\i\p\0\l\h\w\j\n\w\u\e\k\1\c\p\5\v\7\v\n\t\m\x\1\a\q\k\z\5\p\t\y\a\4\7\l\s\3\r\y\j\4\e\1\p\9\r\q\c\a\b\8\2\t\p\2\j\1\l\j\y\t\b\h\z\p\u\m\7\1\z\a\y\z\y\z\x\e\0\3\i\e\b\y\i\g\x\i\b\q\1\m\a\g\g\c\r\9\d\t\g\q\n\o\3\1\n\f\h\v\h\p\m\q\5\t\3\d\4\p\r\o\w\j\m\k\x\s\5\y\6\3\z\k\t\9\x\7\s\8\2\n\g\i\x\x\c\7\4\3\p\g\l\r\8\8\d\0\d\a\g\r\9\1\j\h\q\7\b\y\i\u\r\g\f\d\7\1\y\j\m\g\v\b\8\9\b\d\y\1\j\x\5\t\c\w\s\n\i\n\s\t\5\i\u\s\l\y\2\b\n\x\c\c\s\5\n\a\4\r\k\1\5\k\s\r\o\c\5\f\t\k\8\z\e\w\z\r\e\p\q\0\j\8\4\k\9\o\9\h\9\e\2\h\2\4\z\b\z\m\j\k\a\9\p\8\h\4\t\q\a\v\f\0\b\y\r\j\f\n\v\n\4\6\b\5\k\p\q\y\x\b\o\f\6\o\b\e\u\r\o\c\a\b\d\3\r\j\u\v\v\b\i\1\m\h\n\x\w\t\y\c\0\b\8\a\m\f\y\h\0\u\k\v\7\i\y\l\v\6\f\i\o\v\z\q\m\i\j\z\5\z\k\h\4\p\s\4\u\f\h\q\i\u\y\d\7\h\7\t\o\w\x\d\d\s\5\r\2\5\c\k\0\1\i\h\4\0\z\v\s\z\6\z\m\4\f\b\5\s\f\2\g\h\m\p\q\h\u\r\v\u\k\j\9\h\3\b\9\f\o\e\a\0\h\b\1\l\l\2\j\4\l\z\h\z\9\m\4\o\5\0\q\4\f\r\7\j\1\a\y\i\l\z\4\j\6\h\n\q\3\m\i\z\w\f\b\t\s\w\y\0\h\k\n\0\b\c\l\v\p\v\h\j\6\d\u\t\y\1\y\q\m\c\c\e\p\y\5\8\v\y\f\c\x\r\4\4\z\9\b\l\u\y\6\9\2\v\8\o\v\b\f\a\i\d\c\a\v\x\j\m\d\x\4\2\l\8\x
\c\0\x\7\2\k\b\3\o\j\y\0\h\k\c\e\3\n\v\1\8\g\1\d\i\6\n\b\3\o\6\9\x\h\0\p\z\4\5\m\9\9\4\0\j\f\i\2\l\6\0\p\h\w\v\n\3\y\4\3\m\8\c\b\y\e\w\c\2\5\p\j\a\9\d\n\n\5\v\d\5\4\c\h\n\o\e\q\5\a\i\x\u\w\z\5\p\t\t\u\x\i\i\j\1\o\5\j\u\m\u\y\n\a\0\e\h\3\2\o\3\e\e\d\3\8\l\3\5\m\y\9\p\6\w\b\q\1\t\2\w\x\j\p\f\p\x\i\w\i\7\y\q\v\q\g\4\a\4\4\h\p\9\a\j\p\m\4\z\a\m\q\o\t\9\7\5\h\s\a\0\8\f\8\g\u\h\9\5\c\f\f\p\l\3\u\u\x\g\6\c\v\o\p\s\y\d\v\a\7\m\a\5\s\j\y\w\1\a\t\d\4\c\2\d\6\8\f\5\k\f\r\r\4\i\h\m\a\c\j\r\h\y\4\w\c\m\m\1\4\z\1\8\3\y\9\2\z\g\m\v\y\1\n\p\9\3\9\d\8\g\l\1\h\g\m\t\q\a\g\h\i\o\w\9\d\8\0\x\1\c\5\s\l\u\5\s\y\u\3\t\1\1\c\y\9\a\v\4\z\c\u\y\6\7\l\h\t\x\j\v\t\8\o\q\4\m\6\7\i\8\2\f\g\4\2\6\7\j\q\b\f\y\y\4\5\k\5\v\f\p\e\d\s\h\0\s\g\u\q\v\d\3\t\r\a\5\w\c\a\u\1\p\7\o\9\i\b\8\c\9\z\z\g\i\t\n\e\q\s\n\f\9\6\h\i\0\v\h\u\r\a\y\p\f\m\s\n\9\b\m\t\q\t\e\6\t\h\j\5\t\y\n\d\1\l\g\a\j\i\d\0\u\m\k\9\o\0\4\e\b\n\z\0\4\2\o\n\k\a\b\7\v\8\x\s\e\1\9\9\1\k\b\0\4\a\0\v\7\v\c\2\u\j\q\n\v\7\8\s\a\y\g\r\c\p\q\g\d\c\6\i\k\z\s\0\j\t\k\k\v\m\d\o\g\7\x\p\n\y\5\3\1\9\k\2\2\z\h\0\x\0\9\j\9\9\n\c\a\j\r\5\t\h\s\c\1\t\4\m\r ]] 00:10:39.855 00:10:39.855 real 0m1.139s 00:10:39.855 user 0m0.762s 00:10:39.855 sys 0m0.507s 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.855 ************************************ 00:10:39.855 END TEST dd_rw_offset 00:10:39.855 ************************************ 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:39.855 09:21:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:39.855 [2024-12-09 09:21:17.552268] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:39.855 [2024-12-09 09:21:17.552445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:10:39.855 { 00:10:39.855 "subsystems": [ 00:10:39.855 { 00:10:39.855 "subsystem": "bdev", 00:10:39.855 "config": [ 00:10:39.855 { 00:10:39.855 "params": { 00:10:39.855 "trtype": "pcie", 00:10:39.855 "traddr": "0000:00:10.0", 00:10:39.855 "name": "Nvme0" 00:10:39.855 }, 00:10:39.856 "method": "bdev_nvme_attach_controller" 00:10:39.856 }, 00:10:39.856 { 00:10:39.856 "method": "bdev_wait_for_examine" 00:10:39.856 } 00:10:39.856 ] 00:10:39.856 } 00:10:39.856 ] 00:10:39.856 } 00:10:40.114 [2024-12-09 09:21:17.695359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.114 [2024-12-09 09:21:17.739815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.114 [2024-12-09 09:21:17.780945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.372  [2024-12-09T09:21:18.095Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:40.372 00:10:40.372 09:21:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:40.372 ************************************ 00:10:40.372 END TEST spdk_dd_basic_rw 00:10:40.372 ************************************ 00:10:40.372 00:10:40.372 real 0m15.837s 00:10:40.372 user 0m10.839s 00:10:40.372 sys 0m6.170s 00:10:40.372 09:21:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.372 09:21:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 09:21:18 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:40.630 09:21:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.630 09:21:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.630 09:21:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:40.630 ************************************ 00:10:40.630 START TEST spdk_dd_posix 00:10:40.630 ************************************ 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:40.630 * Looking for test storage... 
00:10:40.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.630 --rc genhtml_branch_coverage=1 00:10:40.630 --rc genhtml_function_coverage=1 00:10:40.630 --rc genhtml_legend=1 00:10:40.630 --rc geninfo_all_blocks=1 00:10:40.630 --rc geninfo_unexecuted_blocks=1 00:10:40.630 00:10:40.630 ' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.630 --rc genhtml_branch_coverage=1 00:10:40.630 --rc genhtml_function_coverage=1 00:10:40.630 --rc genhtml_legend=1 00:10:40.630 --rc geninfo_all_blocks=1 00:10:40.630 --rc geninfo_unexecuted_blocks=1 00:10:40.630 00:10:40.630 ' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.630 --rc genhtml_branch_coverage=1 00:10:40.630 --rc genhtml_function_coverage=1 00:10:40.630 --rc genhtml_legend=1 00:10:40.630 --rc geninfo_all_blocks=1 00:10:40.630 --rc geninfo_unexecuted_blocks=1 00:10:40.630 00:10:40.630 ' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.630 --rc genhtml_branch_coverage=1 00:10:40.630 --rc genhtml_function_coverage=1 00:10:40.630 --rc genhtml_legend=1 00:10:40.630 --rc geninfo_all_blocks=1 00:10:40.630 --rc geninfo_unexecuted_blocks=1 00:10:40.630 00:10:40.630 ' 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.630 09:21:18 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:10:40.631 * First test run, liburing in use 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:10:40.631 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:40.889 ************************************ 00:10:40.889 START TEST dd_flag_append 00:10:40.889 ************************************ 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ageeyrbogvveqf60kjeqpf0j890jhmwx 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=j703rz6dszwhgr6f2tua0uvdlobgijw9 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ageeyrbogvveqf60kjeqpf0j890jhmwx 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s j703rz6dszwhgr6f2tua0uvdlobgijw9 00:10:40.889 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:40.889 [2024-12-09 09:21:18.426123] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:40.889 [2024-12-09 09:21:18.426204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60082 ] 00:10:40.889 [2024-12-09 09:21:18.575589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.147 [2024-12-09 09:21:18.627528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.147 [2024-12-09 09:21:18.668230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.147  [2024-12-09T09:21:18.870Z] Copying: 32/32 [B] (average 31 kBps) 00:10:41.147 00:10:41.147 ************************************ 00:10:41.147 END TEST dd_flag_append 00:10:41.147 ************************************ 00:10:41.147 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ j703rz6dszwhgr6f2tua0uvdlobgijw9ageeyrbogvveqf60kjeqpf0j890jhmwx == \j\7\0\3\r\z\6\d\s\z\w\h\g\r\6\f\2\t\u\a\0\u\v\d\l\o\b\g\i\j\w\9\a\g\e\e\y\r\b\o\g\v\v\e\q\f\6\0\k\j\e\q\p\f\0\j\8\9\0\j\h\m\w\x ]] 00:10:41.147 00:10:41.147 real 0m0.486s 00:10:41.147 user 0m0.247s 00:10:41.147 sys 0m0.236s 00:10:41.147 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.147 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:41.406 ************************************ 00:10:41.406 START TEST dd_flag_directory 00:10:41.406 ************************************ 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:41.406 09:21:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:41.406 [2024-12-09 09:21:18.990974] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:41.406 [2024-12-09 09:21:18.991052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:10:41.664 [2024-12-09 09:21:19.143134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.664 [2024-12-09 09:21:19.193395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.664 [2024-12-09 09:21:19.235391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.664 [2024-12-09 09:21:19.264798] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:41.664 [2024-12-09 09:21:19.265054] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:41.664 [2024-12-09 09:21:19.265072] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:41.664 [2024-12-09 09:21:19.360379] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.923 09:21:19 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:41.923 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:41.923 [2024-12-09 09:21:19.477423] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:41.923 [2024-12-09 09:21:19.477657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60120 ] 00:10:41.923 [2024-12-09 09:21:19.630347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.207 [2024-12-09 09:21:19.678968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.207 [2024-12-09 09:21:19.720793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.207 [2024-12-09 09:21:19.750069] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:42.207 [2024-12-09 09:21:19.750120] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:42.207 [2024-12-09 09:21:19.750139] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:42.207 [2024-12-09 09:21:19.846483] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:10:42.207 ************************************ 00:10:42.207 END TEST dd_flag_directory 00:10:42.207 ************************************ 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:42.207 00:10:42.207 real 0m0.985s 00:10:42.207 user 0m0.527s 00:10:42.207 sys 0m0.247s 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.207 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:10:42.465 09:21:19 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:42.465 ************************************ 00:10:42.465 START TEST dd_flag_nofollow 00:10:42.465 ************************************ 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:42.465 09:21:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:42.465 [2024-12-09 09:21:20.047802] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:42.465 [2024-12-09 09:21:20.048035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60149 ] 00:10:42.724 [2024-12-09 09:21:20.198655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.724 [2024-12-09 09:21:20.247697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.724 [2024-12-09 09:21:20.288263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.724 [2024-12-09 09:21:20.317369] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:42.724 [2024-12-09 09:21:20.317418] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:42.724 [2024-12-09 09:21:20.317432] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:42.724 [2024-12-09 09:21:20.411626] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.994 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.995 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.995 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.995 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:42.995 09:21:20 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:42.995 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:42.995 [2024-12-09 09:21:20.527058] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:42.995 [2024-12-09 09:21:20.527131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60157 ] 00:10:42.995 [2024-12-09 09:21:20.677010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.264 [2024-12-09 09:21:20.724303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.264 [2024-12-09 09:21:20.765322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.264 [2024-12-09 09:21:20.794144] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:43.264 [2024-12-09 09:21:20.794194] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:43.264 [2024-12-09 09:21:20.794209] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:43.264 [2024-12-09 09:21:20.888507] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:43.264 09:21:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:43.521 [2024-12-09 09:21:20.995298] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:43.521 [2024-12-09 09:21:20.995550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60167 ] 00:10:43.521 [2024-12-09 09:21:21.144576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.521 [2024-12-09 09:21:21.193052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.521 [2024-12-09 09:21:21.233547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.778  [2024-12-09T09:21:21.501Z] Copying: 512/512 [B] (average 500 kBps) 00:10:43.778 00:10:43.778 ************************************ 00:10:43.778 END TEST dd_flag_nofollow 00:10:43.778 ************************************ 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 5bko6lg0kfaat4ax50mwajzvqn8rgs5ltnonxwdngp5en1mwyjfifnuttvnsqetx42vtl4gai7bpxpn0uo4vc9lurr5by2rxl6aitdtqpnh1rzca3pg4oad81ldo0ywjzv141qe0skw6qhv885fh467l7amcmlievmmi6ewjob4zusgzgjej9rip8gedws5uppdb6ew4gtw0bxfg9nl9by3nwdyjfn43zllyagz7ec90cfxkxrspeg2z09a1h7hyllodt45khudnjim4ppdlyysx8p7w3jdpmr8jxd5r09redp5tjmwixt7vka9rnd6kx7x34afsaykx19ajfiox0w32xlztimvd11zvsjap6gx06dlptmewvjmcg9n1v033zyezq1ov3z3dnc8g9xl7j10ec2b7zojxb42576drcg1dkz1429b5jjnj9vdxfvq0o2tcrbzynepmsjz2xsf6de99boccmgpzdu6r8pooo3lp9emhptroqlv7rznylpzc == \5\b\k\o\6\l\g\0\k\f\a\a\t\4\a\x\5\0\m\w\a\j\z\v\q\n\8\r\g\s\5\l\t\n\o\n\x\w\d\n\g\p\5\e\n\1\m\w\y\j\f\i\f\n\u\t\t\v\n\s\q\e\t\x\4\2\v\t\l\4\g\a\i\7\b\p\x\p\n\0\u\o\4\v\c\9\l\u\r\r\5\b\y\2\r\x\l\6\a\i\t\d\t\q\p\n\h\1\r\z\c\a\3\p\g\4\o\a\d\8\1\l\d\o\0\y\w\j\z\v\1\4\1\q\e\0\s\k\w\6\q\h\v\8\8\5\f\h\4\6\7\l\7\a\m\c\m\l\i\e\v\m\m\i\6\e\w\j\o\b\4\z\u\s\g\z\g\j\e\j\9\r\i\p\8\g\e\d\w\s\5\u\p\p\d\b\6\e\w\4\g\t\w\0\b\x\f\g\9\n\l\9\b\y\3\n\w\d\y\j\f\n\4\3\z\l\l\y\a\g\z\7\e\c\9\0\c\f\x\k\x\r\s\p\e\g\2\z\0\9\a\1\h\7\h\y\l\l\o\d\t\4\5\k\h\u\d\n\j\i\m\4\p\p\d\l\y\y\s\x\8\p\7\w\3\j\d\p\m\r\8\j\x\d\5\r\0\9\r\e\d\p\5\t\j\m\w\i\x\t\7\v\k\a\9\r\n\d\6\k\x\7\x\3\4\a\f\s\a\y\k\x\1\9\a\j\f\i\o\x\0\w\3\2\x\l\z\t\i\m\v\d\1\1\z\v\s\j\a\p\6\g\x\0\6\d\l\p\t\m\e\w\v\j\m\c\g\9\n\1\v\0\3\3\z\y\e\z\q\1\o\v\3\z\3\d\n\c\8\g\9\x\l\7\j\1\0\e\c\2\b\7\z\o\j\x\b\4\2\5\7\6\d\r\c\g\1\d\k\z\1\4\2\9\b\5\j\j\n\j\9\v\d\x\f\v\q\0\o\2\t\c\r\b\z\y\n\e\p\m\s\j\z\2\x\s\f\6\d\e\9\9\b\o\c\c\m\g\p\z\d\u\6\r\8\p\o\o\o\3\l\p\9\e\m\h\p\t\r\o\q\l\v\7\r\z\n\y\l\p\z\c ]] 00:10:43.778 00:10:43.778 real 0m1.439s 00:10:43.778 user 0m0.751s 00:10:43.778 sys 0m0.476s 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:43.778 ************************************ 00:10:43.778 START TEST dd_flag_noatime 00:10:43.778 ************************************ 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:43.778 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:44.037 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733736081 00:10:44.037 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:44.037 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733736081 00:10:44.037 09:21:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:10:44.971 09:21:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:44.971 [2024-12-09 09:21:22.566404] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:44.971 [2024-12-09 09:21:22.566495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60204 ] 00:10:45.228 [2024-12-09 09:21:22.716295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.228 [2024-12-09 09:21:22.762011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.228 [2024-12-09 09:21:22.802483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.228  [2024-12-09T09:21:23.221Z] Copying: 512/512 [B] (average 500 kBps) 00:10:45.498 00:10:45.498 09:21:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:45.498 09:21:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733736081 )) 00:10:45.498 09:21:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.498 09:21:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733736081 )) 00:10:45.498 09:21:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:45.498 [2024-12-09 09:21:23.044662] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:45.498 [2024-12-09 09:21:23.044899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:10:45.498 [2024-12-09 09:21:23.193039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.754 [2024-12-09 09:21:23.241033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.754 [2024-12-09 09:21:23.281450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.754  [2024-12-09T09:21:23.477Z] Copying: 512/512 [B] (average 500 kBps) 00:10:45.754 00:10:45.754 09:21:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:45.754 09:21:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733736083 )) 00:10:45.754 00:10:45.754 real 0m1.987s 00:10:45.754 user 0m0.517s 00:10:45.754 sys 0m0.480s 00:10:45.754 09:21:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.754 09:21:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:45.754 ************************************ 00:10:45.754 END TEST dd_flag_noatime 00:10:45.754 ************************************ 00:10:46.011 09:21:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:46.012 ************************************ 00:10:46.012 START TEST dd_flags_misc 00:10:46.012 ************************************ 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:46.012 09:21:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:46.012 [2024-12-09 09:21:23.589174] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:46.012 [2024-12-09 09:21:23.589418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60246 ] 00:10:46.269 [2024-12-09 09:21:23.752098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.269 [2024-12-09 09:21:23.801121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.269 [2024-12-09 09:21:23.841768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.269  [2024-12-09T09:21:24.251Z] Copying: 512/512 [B] (average 500 kBps) 00:10:46.528 00:10:46.528 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xwedn7dx78pimb0fl8u5utwoqoasuv8mqvgx1e8vqp1fqy4suyf649nvqavcsyph7s6lz9yt8p0a6gia1h4k8j1vhc4h0cuwdouybjz94dtu1epl11biykzr3jo2gijs0tref9uq11lf459q5gvid5piezv6q0rkimi2ipzksnqxcrkbyvhq2c7exdtp26y84bxnjd1eod3qx86g9wke3rzda1jrm4semws9fxsjjd0pbuz8eo4nckmwmoozsikbi3kw5xm3alu23kqwieyf4k4cjgx8uk4bp8rjbzxdbmp7c6fk8hi5j7w41gy4arg68sf78q3zjkwlg3ajz4xyllh0yo3iqzc7g8dtiyzvdk0qplhd48a85qm5xt7uewa6ueny49rlgwhircdo5lia4g37tkqncqpehmbeq80aib53ifdzr19bixzqnnavnqdlbhitbfqlju0lb8nx1xgo55c3okpj35dyrsfjzh3nxnst2o7ihup5rkcu3m3tll9j == \x\w\e\d\n\7\d\x\7\8\p\i\m\b\0\f\l\8\u\5\u\t\w\o\q\o\a\s\u\v\8\m\q\v\g\x\1\e\8\v\q\p\1\f\q\y\4\s\u\y\f\6\4\9\n\v\q\a\v\c\s\y\p\h\7\s\6\l\z\9\y\t\8\p\0\a\6\g\i\a\1\h\4\k\8\j\1\v\h\c\4\h\0\c\u\w\d\o\u\y\b\j\z\9\4\d\t\u\1\e\p\l\1\1\b\i\y\k\z\r\3\j\o\2\g\i\j\s\0\t\r\e\f\9\u\q\1\1\l\f\4\5\9\q\5\g\v\i\d\5\p\i\e\z\v\6\q\0\r\k\i\m\i\2\i\p\z\k\s\n\q\x\c\r\k\b\y\v\h\q\2\c\7\e\x\d\t\p\2\6\y\8\4\b\x\n\j\d\1\e\o\d\3\q\x\8\6\g\9\w\k\e\3\r\z\d\a\1\j\r\m\4\s\e\m\w\s\9\f\x\s\j\j\d\0\p\b\u\z\8\e\o\4\n\c\k\m\w\m\o\o\z\s\i\k\b\i\3\k\w\5\x\m\3\a\l\u\2\3\k\q\w\i\e\y\f\4\k\4\c\j\g\x\8\u\k\4\b\p\8\r\j\b\z\x\d\b\m\p\7\c\6\f\k\8\h\i\5\j\7\w\4\1\g\y\4\a\r\g\6\8\s\f\7\8\q\3\z\j\k\w\l\g\3\a\j\z\4\x\y\l\l\h\0\y\o\3\i\q\z\c\7\g\8\d\t\i\y\z\v\d\k\0\q\p\l\h\d\4\8\a\8\5\q\m\5\x\t\7\u\e\w\a\6\u\e\n\y\4\9\r\l\g\w\h\i\r\c\d\o\5\l\i\a\4\g\3\7\t\k\q\n\c\q\p\e\h\m\b\e\q\8\0\a\i\b\5\3\i\f\d\z\r\1\9\b\i\x\z\q\n\n\a\v\n\q\d\l\b\h\i\t\b\f\q\l\j\u\0\l\b\8\n\x\1\x\g\o\5\5\c\3\o\k\p\j\3\5\d\y\r\s\f\j\z\h\3\n\x\n\s\t\2\o\7\i\h\u\p\5\r\k\c\u\3\m\3\t\l\l\9\j ]] 00:10:46.528 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:46.528 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:46.528 [2024-12-09 09:21:24.070147] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:46.528 [2024-12-09 09:21:24.070425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60261 ] 00:10:46.528 [2024-12-09 09:21:24.219011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.787 [2024-12-09 09:21:24.268654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.787 [2024-12-09 09:21:24.309111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.787  [2024-12-09T09:21:24.510Z] Copying: 512/512 [B] (average 500 kBps) 00:10:46.787 00:10:46.788 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xwedn7dx78pimb0fl8u5utwoqoasuv8mqvgx1e8vqp1fqy4suyf649nvqavcsyph7s6lz9yt8p0a6gia1h4k8j1vhc4h0cuwdouybjz94dtu1epl11biykzr3jo2gijs0tref9uq11lf459q5gvid5piezv6q0rkimi2ipzksnqxcrkbyvhq2c7exdtp26y84bxnjd1eod3qx86g9wke3rzda1jrm4semws9fxsjjd0pbuz8eo4nckmwmoozsikbi3kw5xm3alu23kqwieyf4k4cjgx8uk4bp8rjbzxdbmp7c6fk8hi5j7w41gy4arg68sf78q3zjkwlg3ajz4xyllh0yo3iqzc7g8dtiyzvdk0qplhd48a85qm5xt7uewa6ueny49rlgwhircdo5lia4g37tkqncqpehmbeq80aib53ifdzr19bixzqnnavnqdlbhitbfqlju0lb8nx1xgo55c3okpj35dyrsfjzh3nxnst2o7ihup5rkcu3m3tll9j == \x\w\e\d\n\7\d\x\7\8\p\i\m\b\0\f\l\8\u\5\u\t\w\o\q\o\a\s\u\v\8\m\q\v\g\x\1\e\8\v\q\p\1\f\q\y\4\s\u\y\f\6\4\9\n\v\q\a\v\c\s\y\p\h\7\s\6\l\z\9\y\t\8\p\0\a\6\g\i\a\1\h\4\k\8\j\1\v\h\c\4\h\0\c\u\w\d\o\u\y\b\j\z\9\4\d\t\u\1\e\p\l\1\1\b\i\y\k\z\r\3\j\o\2\g\i\j\s\0\t\r\e\f\9\u\q\1\1\l\f\4\5\9\q\5\g\v\i\d\5\p\i\e\z\v\6\q\0\r\k\i\m\i\2\i\p\z\k\s\n\q\x\c\r\k\b\y\v\h\q\2\c\7\e\x\d\t\p\2\6\y\8\4\b\x\n\j\d\1\e\o\d\3\q\x\8\6\g\9\w\k\e\3\r\z\d\a\1\j\r\m\4\s\e\m\w\s\9\f\x\s\j\j\d\0\p\b\u\z\8\e\o\4\n\c\k\m\w\m\o\o\z\s\i\k\b\i\3\k\w\5\x\m\3\a\l\u\2\3\k\q\w\i\e\y\f\4\k\4\c\j\g\x\8\u\k\4\b\p\8\r\j\b\z\x\d\b\m\p\7\c\6\f\k\8\h\i\5\j\7\w\4\1\g\y\4\a\r\g\6\8\s\f\7\8\q\3\z\j\k\w\l\g\3\a\j\z\4\x\y\l\l\h\0\y\o\3\i\q\z\c\7\g\8\d\t\i\y\z\v\d\k\0\q\p\l\h\d\4\8\a\8\5\q\m\5\x\t\7\u\e\w\a\6\u\e\n\y\4\9\r\l\g\w\h\i\r\c\d\o\5\l\i\a\4\g\3\7\t\k\q\n\c\q\p\e\h\m\b\e\q\8\0\a\i\b\5\3\i\f\d\z\r\1\9\b\i\x\z\q\n\n\a\v\n\q\d\l\b\h\i\t\b\f\q\l\j\u\0\l\b\8\n\x\1\x\g\o\5\5\c\3\o\k\p\j\3\5\d\y\r\s\f\j\z\h\3\n\x\n\s\t\2\o\7\i\h\u\p\5\r\k\c\u\3\m\3\t\l\l\9\j ]] 00:10:46.788 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:46.788 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:47.046 [2024-12-09 09:21:24.538578] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:47.046 [2024-12-09 09:21:24.538668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:10:47.046 [2024-12-09 09:21:24.691557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.046 [2024-12-09 09:21:24.741091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.305 [2024-12-09 09:21:24.781835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.305  [2024-12-09T09:21:25.028Z] Copying: 512/512 [B] (average 71 kBps) 00:10:47.305 00:10:47.305 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xwedn7dx78pimb0fl8u5utwoqoasuv8mqvgx1e8vqp1fqy4suyf649nvqavcsyph7s6lz9yt8p0a6gia1h4k8j1vhc4h0cuwdouybjz94dtu1epl11biykzr3jo2gijs0tref9uq11lf459q5gvid5piezv6q0rkimi2ipzksnqxcrkbyvhq2c7exdtp26y84bxnjd1eod3qx86g9wke3rzda1jrm4semws9fxsjjd0pbuz8eo4nckmwmoozsikbi3kw5xm3alu23kqwieyf4k4cjgx8uk4bp8rjbzxdbmp7c6fk8hi5j7w41gy4arg68sf78q3zjkwlg3ajz4xyllh0yo3iqzc7g8dtiyzvdk0qplhd48a85qm5xt7uewa6ueny49rlgwhircdo5lia4g37tkqncqpehmbeq80aib53ifdzr19bixzqnnavnqdlbhitbfqlju0lb8nx1xgo55c3okpj35dyrsfjzh3nxnst2o7ihup5rkcu3m3tll9j == \x\w\e\d\n\7\d\x\7\8\p\i\m\b\0\f\l\8\u\5\u\t\w\o\q\o\a\s\u\v\8\m\q\v\g\x\1\e\8\v\q\p\1\f\q\y\4\s\u\y\f\6\4\9\n\v\q\a\v\c\s\y\p\h\7\s\6\l\z\9\y\t\8\p\0\a\6\g\i\a\1\h\4\k\8\j\1\v\h\c\4\h\0\c\u\w\d\o\u\y\b\j\z\9\4\d\t\u\1\e\p\l\1\1\b\i\y\k\z\r\3\j\o\2\g\i\j\s\0\t\r\e\f\9\u\q\1\1\l\f\4\5\9\q\5\g\v\i\d\5\p\i\e\z\v\6\q\0\r\k\i\m\i\2\i\p\z\k\s\n\q\x\c\r\k\b\y\v\h\q\2\c\7\e\x\d\t\p\2\6\y\8\4\b\x\n\j\d\1\e\o\d\3\q\x\8\6\g\9\w\k\e\3\r\z\d\a\1\j\r\m\4\s\e\m\w\s\9\f\x\s\j\j\d\0\p\b\u\z\8\e\o\4\n\c\k\m\w\m\o\o\z\s\i\k\b\i\3\k\w\5\x\m\3\a\l\u\2\3\k\q\w\i\e\y\f\4\k\4\c\j\g\x\8\u\k\4\b\p\8\r\j\b\z\x\d\b\m\p\7\c\6\f\k\8\h\i\5\j\7\w\4\1\g\y\4\a\r\g\6\8\s\f\7\8\q\3\z\j\k\w\l\g\3\a\j\z\4\x\y\l\l\h\0\y\o\3\i\q\z\c\7\g\8\d\t\i\y\z\v\d\k\0\q\p\l\h\d\4\8\a\8\5\q\m\5\x\t\7\u\e\w\a\6\u\e\n\y\4\9\r\l\g\w\h\i\r\c\d\o\5\l\i\a\4\g\3\7\t\k\q\n\c\q\p\e\h\m\b\e\q\8\0\a\i\b\5\3\i\f\d\z\r\1\9\b\i\x\z\q\n\n\a\v\n\q\d\l\b\h\i\t\b\f\q\l\j\u\0\l\b\8\n\x\1\x\g\o\5\5\c\3\o\k\p\j\3\5\d\y\r\s\f\j\z\h\3\n\x\n\s\t\2\o\7\i\h\u\p\5\r\k\c\u\3\m\3\t\l\l\9\j ]] 00:10:47.305 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:47.305 09:21:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:47.305 [2024-12-09 09:21:25.014050] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:47.305 [2024-12-09 09:21:25.014247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:10:47.574 [2024-12-09 09:21:25.162869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.574 [2024-12-09 09:21:25.210080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.574 [2024-12-09 09:21:25.250414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.574  [2024-12-09T09:21:25.555Z] Copying: 512/512 [B] (average 250 kBps) 00:10:47.832 00:10:47.832 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xwedn7dx78pimb0fl8u5utwoqoasuv8mqvgx1e8vqp1fqy4suyf649nvqavcsyph7s6lz9yt8p0a6gia1h4k8j1vhc4h0cuwdouybjz94dtu1epl11biykzr3jo2gijs0tref9uq11lf459q5gvid5piezv6q0rkimi2ipzksnqxcrkbyvhq2c7exdtp26y84bxnjd1eod3qx86g9wke3rzda1jrm4semws9fxsjjd0pbuz8eo4nckmwmoozsikbi3kw5xm3alu23kqwieyf4k4cjgx8uk4bp8rjbzxdbmp7c6fk8hi5j7w41gy4arg68sf78q3zjkwlg3ajz4xyllh0yo3iqzc7g8dtiyzvdk0qplhd48a85qm5xt7uewa6ueny49rlgwhircdo5lia4g37tkqncqpehmbeq80aib53ifdzr19bixzqnnavnqdlbhitbfqlju0lb8nx1xgo55c3okpj35dyrsfjzh3nxnst2o7ihup5rkcu3m3tll9j == \x\w\e\d\n\7\d\x\7\8\p\i\m\b\0\f\l\8\u\5\u\t\w\o\q\o\a\s\u\v\8\m\q\v\g\x\1\e\8\v\q\p\1\f\q\y\4\s\u\y\f\6\4\9\n\v\q\a\v\c\s\y\p\h\7\s\6\l\z\9\y\t\8\p\0\a\6\g\i\a\1\h\4\k\8\j\1\v\h\c\4\h\0\c\u\w\d\o\u\y\b\j\z\9\4\d\t\u\1\e\p\l\1\1\b\i\y\k\z\r\3\j\o\2\g\i\j\s\0\t\r\e\f\9\u\q\1\1\l\f\4\5\9\q\5\g\v\i\d\5\p\i\e\z\v\6\q\0\r\k\i\m\i\2\i\p\z\k\s\n\q\x\c\r\k\b\y\v\h\q\2\c\7\e\x\d\t\p\2\6\y\8\4\b\x\n\j\d\1\e\o\d\3\q\x\8\6\g\9\w\k\e\3\r\z\d\a\1\j\r\m\4\s\e\m\w\s\9\f\x\s\j\j\d\0\p\b\u\z\8\e\o\4\n\c\k\m\w\m\o\o\z\s\i\k\b\i\3\k\w\5\x\m\3\a\l\u\2\3\k\q\w\i\e\y\f\4\k\4\c\j\g\x\8\u\k\4\b\p\8\r\j\b\z\x\d\b\m\p\7\c\6\f\k\8\h\i\5\j\7\w\4\1\g\y\4\a\r\g\6\8\s\f\7\8\q\3\z\j\k\w\l\g\3\a\j\z\4\x\y\l\l\h\0\y\o\3\i\q\z\c\7\g\8\d\t\i\y\z\v\d\k\0\q\p\l\h\d\4\8\a\8\5\q\m\5\x\t\7\u\e\w\a\6\u\e\n\y\4\9\r\l\g\w\h\i\r\c\d\o\5\l\i\a\4\g\3\7\t\k\q\n\c\q\p\e\h\m\b\e\q\8\0\a\i\b\5\3\i\f\d\z\r\1\9\b\i\x\z\q\n\n\a\v\n\q\d\l\b\h\i\t\b\f\q\l\j\u\0\l\b\8\n\x\1\x\g\o\5\5\c\3\o\k\p\j\3\5\d\y\r\s\f\j\z\h\3\n\x\n\s\t\2\o\7\i\h\u\p\5\r\k\c\u\3\m\3\t\l\l\9\j ]] 00:10:47.832 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:47.832 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:47.832 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:47.833 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:47.833 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:47.833 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:47.833 [2024-12-09 09:21:25.480153] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:47.833 [2024-12-09 09:21:25.480236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60284 ] 00:10:48.090 [2024-12-09 09:21:25.630021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.090 [2024-12-09 09:21:25.681268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.090 [2024-12-09 09:21:25.721716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.090  [2024-12-09T09:21:26.072Z] Copying: 512/512 [B] (average 500 kBps) 00:10:48.349 00:10:48.349 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9lfn8igk2mlt1bq9d0jp1x8do8zrpoa34buy8u3qu98jdopqfy4unjjwddytyp99lqpumztaakoy0t18ly2ihg9vdtj357lewldgrzzhj65ovbkg43zkcc4oyac3sbfc0ra1hz6dperb9vf6i9hejeq1lx2jo57amcfp5t6yt3icwriypbtdht17nzt40x5r9lo8xgaekbiot2eqcz8sc4ew6qsve21s2nfgvse2vvsl0nsw0ooe2sx8s0h27diqtnr9mr9d62si365enk1ciz7udlnfaaon7i0le80sqvwnpprlv0q5ajzi0cc798misihtf4ugaphzl7p9y1lmf6cvxas6deku2b78g86opt2sk8tzx0asqq6zns4nessqmovi99ojojr4eobvg9z4gthb80i0ys1tazl8h71jxk7atpxshf9noc0am98jhu3n46oip8hx8t1og9f2d4r8uyurhb9c8w56i2kl1gqebmgv5p297pdpv3586evy7w5w == \9\l\f\n\8\i\g\k\2\m\l\t\1\b\q\9\d\0\j\p\1\x\8\d\o\8\z\r\p\o\a\3\4\b\u\y\8\u\3\q\u\9\8\j\d\o\p\q\f\y\4\u\n\j\j\w\d\d\y\t\y\p\9\9\l\q\p\u\m\z\t\a\a\k\o\y\0\t\1\8\l\y\2\i\h\g\9\v\d\t\j\3\5\7\l\e\w\l\d\g\r\z\z\h\j\6\5\o\v\b\k\g\4\3\z\k\c\c\4\o\y\a\c\3\s\b\f\c\0\r\a\1\h\z\6\d\p\e\r\b\9\v\f\6\i\9\h\e\j\e\q\1\l\x\2\j\o\5\7\a\m\c\f\p\5\t\6\y\t\3\i\c\w\r\i\y\p\b\t\d\h\t\1\7\n\z\t\4\0\x\5\r\9\l\o\8\x\g\a\e\k\b\i\o\t\2\e\q\c\z\8\s\c\4\e\w\6\q\s\v\e\2\1\s\2\n\f\g\v\s\e\2\v\v\s\l\0\n\s\w\0\o\o\e\2\s\x\8\s\0\h\2\7\d\i\q\t\n\r\9\m\r\9\d\6\2\s\i\3\6\5\e\n\k\1\c\i\z\7\u\d\l\n\f\a\a\o\n\7\i\0\l\e\8\0\s\q\v\w\n\p\p\r\l\v\0\q\5\a\j\z\i\0\c\c\7\9\8\m\i\s\i\h\t\f\4\u\g\a\p\h\z\l\7\p\9\y\1\l\m\f\6\c\v\x\a\s\6\d\e\k\u\2\b\7\8\g\8\6\o\p\t\2\s\k\8\t\z\x\0\a\s\q\q\6\z\n\s\4\n\e\s\s\q\m\o\v\i\9\9\o\j\o\j\r\4\e\o\b\v\g\9\z\4\g\t\h\b\8\0\i\0\y\s\1\t\a\z\l\8\h\7\1\j\x\k\7\a\t\p\x\s\h\f\9\n\o\c\0\a\m\9\8\j\h\u\3\n\4\6\o\i\p\8\h\x\8\t\1\o\g\9\f\2\d\4\r\8\u\y\u\r\h\b\9\c\8\w\5\6\i\2\k\l\1\g\q\e\b\m\g\v\5\p\2\9\7\p\d\p\v\3\5\8\6\e\v\y\7\w\5\w ]] 00:10:48.349 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:48.349 09:21:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:48.349 [2024-12-09 09:21:25.945815] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:48.349 [2024-12-09 09:21:25.945897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:10:48.608 [2024-12-09 09:21:26.096272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.608 [2024-12-09 09:21:26.145303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.608 [2024-12-09 09:21:26.185862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.608  [2024-12-09T09:21:26.589Z] Copying: 512/512 [B] (average 500 kBps) 00:10:48.866 00:10:48.867 09:21:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9lfn8igk2mlt1bq9d0jp1x8do8zrpoa34buy8u3qu98jdopqfy4unjjwddytyp99lqpumztaakoy0t18ly2ihg9vdtj357lewldgrzzhj65ovbkg43zkcc4oyac3sbfc0ra1hz6dperb9vf6i9hejeq1lx2jo57amcfp5t6yt3icwriypbtdht17nzt40x5r9lo8xgaekbiot2eqcz8sc4ew6qsve21s2nfgvse2vvsl0nsw0ooe2sx8s0h27diqtnr9mr9d62si365enk1ciz7udlnfaaon7i0le80sqvwnpprlv0q5ajzi0cc798misihtf4ugaphzl7p9y1lmf6cvxas6deku2b78g86opt2sk8tzx0asqq6zns4nessqmovi99ojojr4eobvg9z4gthb80i0ys1tazl8h71jxk7atpxshf9noc0am98jhu3n46oip8hx8t1og9f2d4r8uyurhb9c8w56i2kl1gqebmgv5p297pdpv3586evy7w5w == \9\l\f\n\8\i\g\k\2\m\l\t\1\b\q\9\d\0\j\p\1\x\8\d\o\8\z\r\p\o\a\3\4\b\u\y\8\u\3\q\u\9\8\j\d\o\p\q\f\y\4\u\n\j\j\w\d\d\y\t\y\p\9\9\l\q\p\u\m\z\t\a\a\k\o\y\0\t\1\8\l\y\2\i\h\g\9\v\d\t\j\3\5\7\l\e\w\l\d\g\r\z\z\h\j\6\5\o\v\b\k\g\4\3\z\k\c\c\4\o\y\a\c\3\s\b\f\c\0\r\a\1\h\z\6\d\p\e\r\b\9\v\f\6\i\9\h\e\j\e\q\1\l\x\2\j\o\5\7\a\m\c\f\p\5\t\6\y\t\3\i\c\w\r\i\y\p\b\t\d\h\t\1\7\n\z\t\4\0\x\5\r\9\l\o\8\x\g\a\e\k\b\i\o\t\2\e\q\c\z\8\s\c\4\e\w\6\q\s\v\e\2\1\s\2\n\f\g\v\s\e\2\v\v\s\l\0\n\s\w\0\o\o\e\2\s\x\8\s\0\h\2\7\d\i\q\t\n\r\9\m\r\9\d\6\2\s\i\3\6\5\e\n\k\1\c\i\z\7\u\d\l\n\f\a\a\o\n\7\i\0\l\e\8\0\s\q\v\w\n\p\p\r\l\v\0\q\5\a\j\z\i\0\c\c\7\9\8\m\i\s\i\h\t\f\4\u\g\a\p\h\z\l\7\p\9\y\1\l\m\f\6\c\v\x\a\s\6\d\e\k\u\2\b\7\8\g\8\6\o\p\t\2\s\k\8\t\z\x\0\a\s\q\q\6\z\n\s\4\n\e\s\s\q\m\o\v\i\9\9\o\j\o\j\r\4\e\o\b\v\g\9\z\4\g\t\h\b\8\0\i\0\y\s\1\t\a\z\l\8\h\7\1\j\x\k\7\a\t\p\x\s\h\f\9\n\o\c\0\a\m\9\8\j\h\u\3\n\4\6\o\i\p\8\h\x\8\t\1\o\g\9\f\2\d\4\r\8\u\y\u\r\h\b\9\c\8\w\5\6\i\2\k\l\1\g\q\e\b\m\g\v\5\p\2\9\7\p\d\p\v\3\5\8\6\e\v\y\7\w\5\w ]] 00:10:48.867 09:21:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:48.867 09:21:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:48.867 [2024-12-09 09:21:26.412204] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:48.867 [2024-12-09 09:21:26.412287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60303 ] 00:10:48.867 [2024-12-09 09:21:26.562582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.125 [2024-12-09 09:21:26.610535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.125 [2024-12-09 09:21:26.651141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.125  [2024-12-09T09:21:26.848Z] Copying: 512/512 [B] (average 250 kBps) 00:10:49.125 00:10:49.125 09:21:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9lfn8igk2mlt1bq9d0jp1x8do8zrpoa34buy8u3qu98jdopqfy4unjjwddytyp99lqpumztaakoy0t18ly2ihg9vdtj357lewldgrzzhj65ovbkg43zkcc4oyac3sbfc0ra1hz6dperb9vf6i9hejeq1lx2jo57amcfp5t6yt3icwriypbtdht17nzt40x5r9lo8xgaekbiot2eqcz8sc4ew6qsve21s2nfgvse2vvsl0nsw0ooe2sx8s0h27diqtnr9mr9d62si365enk1ciz7udlnfaaon7i0le80sqvwnpprlv0q5ajzi0cc798misihtf4ugaphzl7p9y1lmf6cvxas6deku2b78g86opt2sk8tzx0asqq6zns4nessqmovi99ojojr4eobvg9z4gthb80i0ys1tazl8h71jxk7atpxshf9noc0am98jhu3n46oip8hx8t1og9f2d4r8uyurhb9c8w56i2kl1gqebmgv5p297pdpv3586evy7w5w == \9\l\f\n\8\i\g\k\2\m\l\t\1\b\q\9\d\0\j\p\1\x\8\d\o\8\z\r\p\o\a\3\4\b\u\y\8\u\3\q\u\9\8\j\d\o\p\q\f\y\4\u\n\j\j\w\d\d\y\t\y\p\9\9\l\q\p\u\m\z\t\a\a\k\o\y\0\t\1\8\l\y\2\i\h\g\9\v\d\t\j\3\5\7\l\e\w\l\d\g\r\z\z\h\j\6\5\o\v\b\k\g\4\3\z\k\c\c\4\o\y\a\c\3\s\b\f\c\0\r\a\1\h\z\6\d\p\e\r\b\9\v\f\6\i\9\h\e\j\e\q\1\l\x\2\j\o\5\7\a\m\c\f\p\5\t\6\y\t\3\i\c\w\r\i\y\p\b\t\d\h\t\1\7\n\z\t\4\0\x\5\r\9\l\o\8\x\g\a\e\k\b\i\o\t\2\e\q\c\z\8\s\c\4\e\w\6\q\s\v\e\2\1\s\2\n\f\g\v\s\e\2\v\v\s\l\0\n\s\w\0\o\o\e\2\s\x\8\s\0\h\2\7\d\i\q\t\n\r\9\m\r\9\d\6\2\s\i\3\6\5\e\n\k\1\c\i\z\7\u\d\l\n\f\a\a\o\n\7\i\0\l\e\8\0\s\q\v\w\n\p\p\r\l\v\0\q\5\a\j\z\i\0\c\c\7\9\8\m\i\s\i\h\t\f\4\u\g\a\p\h\z\l\7\p\9\y\1\l\m\f\6\c\v\x\a\s\6\d\e\k\u\2\b\7\8\g\8\6\o\p\t\2\s\k\8\t\z\x\0\a\s\q\q\6\z\n\s\4\n\e\s\s\q\m\o\v\i\9\9\o\j\o\j\r\4\e\o\b\v\g\9\z\4\g\t\h\b\8\0\i\0\y\s\1\t\a\z\l\8\h\7\1\j\x\k\7\a\t\p\x\s\h\f\9\n\o\c\0\a\m\9\8\j\h\u\3\n\4\6\o\i\p\8\h\x\8\t\1\o\g\9\f\2\d\4\r\8\u\y\u\r\h\b\9\c\8\w\5\6\i\2\k\l\1\g\q\e\b\m\g\v\5\p\2\9\7\p\d\p\v\3\5\8\6\e\v\y\7\w\5\w ]] 00:10:49.125 09:21:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:49.125 09:21:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:49.384 [2024-12-09 09:21:26.881791] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:49.384 [2024-12-09 09:21:26.882346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60312 ] 00:10:49.384 [2024-12-09 09:21:27.030712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.384 [2024-12-09 09:21:27.081417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.642 [2024-12-09 09:21:27.122399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.642  [2024-12-09T09:21:27.365Z] Copying: 512/512 [B] (average 250 kBps) 00:10:49.642 00:10:49.642 ************************************ 00:10:49.642 END TEST dd_flags_misc 00:10:49.642 ************************************ 00:10:49.642 09:21:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9lfn8igk2mlt1bq9d0jp1x8do8zrpoa34buy8u3qu98jdopqfy4unjjwddytyp99lqpumztaakoy0t18ly2ihg9vdtj357lewldgrzzhj65ovbkg43zkcc4oyac3sbfc0ra1hz6dperb9vf6i9hejeq1lx2jo57amcfp5t6yt3icwriypbtdht17nzt40x5r9lo8xgaekbiot2eqcz8sc4ew6qsve21s2nfgvse2vvsl0nsw0ooe2sx8s0h27diqtnr9mr9d62si365enk1ciz7udlnfaaon7i0le80sqvwnpprlv0q5ajzi0cc798misihtf4ugaphzl7p9y1lmf6cvxas6deku2b78g86opt2sk8tzx0asqq6zns4nessqmovi99ojojr4eobvg9z4gthb80i0ys1tazl8h71jxk7atpxshf9noc0am98jhu3n46oip8hx8t1og9f2d4r8uyurhb9c8w56i2kl1gqebmgv5p297pdpv3586evy7w5w == \9\l\f\n\8\i\g\k\2\m\l\t\1\b\q\9\d\0\j\p\1\x\8\d\o\8\z\r\p\o\a\3\4\b\u\y\8\u\3\q\u\9\8\j\d\o\p\q\f\y\4\u\n\j\j\w\d\d\y\t\y\p\9\9\l\q\p\u\m\z\t\a\a\k\o\y\0\t\1\8\l\y\2\i\h\g\9\v\d\t\j\3\5\7\l\e\w\l\d\g\r\z\z\h\j\6\5\o\v\b\k\g\4\3\z\k\c\c\4\o\y\a\c\3\s\b\f\c\0\r\a\1\h\z\6\d\p\e\r\b\9\v\f\6\i\9\h\e\j\e\q\1\l\x\2\j\o\5\7\a\m\c\f\p\5\t\6\y\t\3\i\c\w\r\i\y\p\b\t\d\h\t\1\7\n\z\t\4\0\x\5\r\9\l\o\8\x\g\a\e\k\b\i\o\t\2\e\q\c\z\8\s\c\4\e\w\6\q\s\v\e\2\1\s\2\n\f\g\v\s\e\2\v\v\s\l\0\n\s\w\0\o\o\e\2\s\x\8\s\0\h\2\7\d\i\q\t\n\r\9\m\r\9\d\6\2\s\i\3\6\5\e\n\k\1\c\i\z\7\u\d\l\n\f\a\a\o\n\7\i\0\l\e\8\0\s\q\v\w\n\p\p\r\l\v\0\q\5\a\j\z\i\0\c\c\7\9\8\m\i\s\i\h\t\f\4\u\g\a\p\h\z\l\7\p\9\y\1\l\m\f\6\c\v\x\a\s\6\d\e\k\u\2\b\7\8\g\8\6\o\p\t\2\s\k\8\t\z\x\0\a\s\q\q\6\z\n\s\4\n\e\s\s\q\m\o\v\i\9\9\o\j\o\j\r\4\e\o\b\v\g\9\z\4\g\t\h\b\8\0\i\0\y\s\1\t\a\z\l\8\h\7\1\j\x\k\7\a\t\p\x\s\h\f\9\n\o\c\0\a\m\9\8\j\h\u\3\n\4\6\o\i\p\8\h\x\8\t\1\o\g\9\f\2\d\4\r\8\u\y\u\r\h\b\9\c\8\w\5\6\i\2\k\l\1\g\q\e\b\m\g\v\5\p\2\9\7\p\d\p\v\3\5\8\6\e\v\y\7\w\5\w ]] 00:10:49.642 00:10:49.642 real 0m3.777s 00:10:49.642 user 0m1.970s 00:10:49.642 sys 0m1.837s 00:10:49.642 09:21:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.642 09:21:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:49.642 09:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:10:49.643 09:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:10:49.643 * Second test run, disabling liburing, forcing AIO 00:10:49.643 09:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:10:49.643 09:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:10:49.643 09:21:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.643 09:21:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.643 09:21:27 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.901 ************************************ 00:10:49.901 START TEST dd_flag_append_forced_aio 00:10:49.901 ************************************ 00:10:49.901 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:10:49.901 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:10:49.901 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:10:49.901 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:10:49.901 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:49.901 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=mwnivf8o2a7a2adtomb8fahklnj3rsd8 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=c7xgb87el1ecyt1jjv358zws7lzomq0s 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s mwnivf8o2a7a2adtomb8fahklnj3rsd8 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s c7xgb87el1ecyt1jjv358zws7lzomq0s 00:10:49.902 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:49.902 [2024-12-09 09:21:27.438582] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:49.902 [2024-12-09 09:21:27.438646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60341 ] 00:10:49.902 [2024-12-09 09:21:27.588417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.160 [2024-12-09 09:21:27.637059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.160 [2024-12-09 09:21:27.677857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.160  [2024-12-09T09:21:27.883Z] Copying: 32/32 [B] (average 31 kBps) 00:10:50.160 00:10:50.161 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ c7xgb87el1ecyt1jjv358zws7lzomq0smwnivf8o2a7a2adtomb8fahklnj3rsd8 == \c\7\x\g\b\8\7\e\l\1\e\c\y\t\1\j\j\v\3\5\8\z\w\s\7\l\z\o\m\q\0\s\m\w\n\i\v\f\8\o\2\a\7\a\2\a\d\t\o\m\b\8\f\a\h\k\l\n\j\3\r\s\d\8 ]] 00:10:50.161 00:10:50.161 real 0m0.501s 00:10:50.161 user 0m0.254s 00:10:50.161 sys 0m0.126s 00:10:50.161 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.161 ************************************ 00:10:50.161 END TEST dd_flag_append_forced_aio 00:10:50.161 ************************************ 00:10:50.161 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:50.419 ************************************ 00:10:50.419 START TEST dd_flag_directory_forced_aio 00:10:50.419 ************************************ 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.419 09:21:27 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:50.419 09:21:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:50.419 [2024-12-09 09:21:28.006397] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:50.419 [2024-12-09 09:21:28.006483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60367 ] 00:10:50.678 [2024-12-09 09:21:28.156355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.678 [2024-12-09 09:21:28.208145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.678 [2024-12-09 09:21:28.248841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.678 [2024-12-09 09:21:28.277575] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:50.678 [2024-12-09 09:21:28.277619] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:50.678 [2024-12-09 09:21:28.277631] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:50.678 [2024-12-09 09:21:28.371097] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.943 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.944 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.944 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.944 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:50.944 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:50.944 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:50.944 [2024-12-09 09:21:28.486835] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:50.944 [2024-12-09 09:21:28.487147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60377 ] 00:10:50.944 [2024-12-09 09:21:28.640563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.240 [2024-12-09 09:21:28.690948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.240 [2024-12-09 09:21:28.731691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.240 [2024-12-09 09:21:28.760896] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:51.240 [2024-12-09 09:21:28.760943] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:51.240 [2024-12-09 09:21:28.760956] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:51.240 [2024-12-09 09:21:28.854775] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:10:51.240 ************************************ 00:10:51.240 END TEST dd_flag_directory_forced_aio 00:10:51.240 ************************************ 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:51.240 09:21:28 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:51.240 00:10:51.240 real 0m0.962s 00:10:51.240 user 0m0.501s 00:10:51.240 sys 0m0.252s 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.240 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:51.498 ************************************ 00:10:51.498 START TEST dd_flag_nofollow_forced_aio 00:10:51.498 ************************************ 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:10:51.498 09:21:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.499 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.499 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.499 [2024-12-09 09:21:29.052125] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:51.499 [2024-12-09 09:21:29.052196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60405 ] 00:10:51.499 [2024-12-09 09:21:29.201906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.757 [2024-12-09 09:21:29.253485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.757 [2024-12-09 09:21:29.293958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.757 [2024-12-09 09:21:29.323034] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:51.757 [2024-12-09 09:21:29.323082] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:51.757 [2024-12-09 09:21:29.323096] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:51.757 [2024-12-09 09:21:29.416912] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:51.757 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:52.015 [2024-12-09 09:21:29.531212] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:52.015 [2024-12-09 09:21:29.531287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:10:52.015 [2024-12-09 09:21:29.679383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.015 [2024-12-09 09:21:29.728933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.273 [2024-12-09 09:21:29.769717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.273 [2024-12-09 09:21:29.799416] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:52.273 [2024-12-09 09:21:29.799482] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:52.273 [2024-12-09 09:21:29.799497] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:52.273 [2024-12-09 09:21:29.893704] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:52.273 09:21:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:52.532 [2024-12-09 09:21:30.004978] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:52.532 [2024-12-09 09:21:30.005201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60417 ] 00:10:52.532 [2024-12-09 09:21:30.155025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.532 [2024-12-09 09:21:30.201812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.532 [2024-12-09 09:21:30.242611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.790  [2024-12-09T09:21:30.514Z] Copying: 512/512 [B] (average 500 kBps) 00:10:52.791 00:10:52.791 ************************************ 00:10:52.791 END TEST dd_flag_nofollow_forced_aio 00:10:52.791 ************************************ 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ efagdfoh88cro411bblc67d2rxgyi3ngbuu6u255qoruban5x8kv65ki6ck114vk7bo3p2tk9lpwad0xyjnnvfdu4ekdeeaky4a0o41a8ueftjvuz5sl1gnmswh2ttwvbragmrtp36flnqhzfg7e4lgq9t8vpxclujlw7n66r9fvhyxbclvb8oy7vdth427t6f2yb2oosiezg9vh3eusdx9y17u9e34e04e17pexh4rtw41cqrt4t66lc35ykt7el2ihbz5an1p9nct0ul36r40nl8b9yiq3q3ce93zfab43qc9a6ytcztcml5g23feivnwbpw2bbv0k2jrc13mmck98j3rzxu1ap0rey1wz2fi63du1z9s6p0b3bi6ygoy5a9z2r5b7atrlac74z0esenuz65xp67l98klg17y25a0l8p4easqmo972ix3oathhkzwra7eu1pk3qg66ubh5nehr84pwpguujc17z25thqbjuuvmkck9ix8w0bmi5vgi == \e\f\a\g\d\f\o\h\8\8\c\r\o\4\1\1\b\b\l\c\6\7\d\2\r\x\g\y\i\3\n\g\b\u\u\6\u\2\5\5\q\o\r\u\b\a\n\5\x\8\k\v\6\5\k\i\6\c\k\1\1\4\v\k\7\b\o\3\p\2\t\k\9\l\p\w\a\d\0\x\y\j\n\n\v\f\d\u\4\e\k\d\e\e\a\k\y\4\a\0\o\4\1\a\8\u\e\f\t\j\v\u\z\5\s\l\1\g\n\m\s\w\h\2\t\t\w\v\b\r\a\g\m\r\t\p\3\6\f\l\n\q\h\z\f\g\7\e\4\l\g\q\9\t\8\v\p\x\c\l\u\j\l\w\7\n\6\6\r\9\f\v\h\y\x\b\c\l\v\b\8\o\y\7\v\d\t\h\4\2\7\t\6\f\2\y\b\2\o\o\s\i\e\z\g\9\v\h\3\e\u\s\d\x\9\y\1\7\u\9\e\3\4\e\0\4\e\1\7\p\e\x\h\4\r\t\w\4\1\c\q\r\t\4\t\6\6\l\c\3\5\y\k\t\7\e\l\2\i\h\b\z\5\a\n\1\p\9\n\c\t\0\u\l\3\6\r\4\0\n\l\8\b\9\y\i\q\3\q\3\c\e\9\3\z\f\a\b\4\3\q\c\9\a\6\y\t\c\z\t\c\m\l\5\g\2\3\f\e\i\v\n\w\b\p\w\2\b\b\v\0\k\2\j\r\c\1\3\m\m\c\k\9\8\j\3\r\z\x\u\1\a\p\0\r\e\y\1\w\z\2\f\i\6\3\d\u\1\z\9\s\6\p\0\b\3\b\i\6\y\g\o\y\5\a\9\z\2\r\5\b\7\a\t\r\l\a\c\7\4\z\0\e\s\e\n\u\z\6\5\x\p\6\7\l\9\8\k\l\g\1\7\y\2\5\a\0\l\8\p\4\e\a\s\q\m\o\9\7\2\i\x\3\o\a\t\h\h\k\z\w\r\a\7\e\u\1\p\k\3\q\g\6\6\u\b\h\5\n\e\h\r\8\4\p\w\p\g\u\u\j\c\1\7\z\2\5\t\h\q\b\j\u\u\v\m\k\c\k\9\i\x\8\w\0\b\m\i\5\v\g\i ]] 00:10:52.791 00:10:52.791 real 0m1.457s 00:10:52.791 user 0m0.746s 00:10:52.791 sys 0m0.382s 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:52.791 ************************************ 00:10:52.791 START TEST dd_flag_noatime_forced_aio 00:10:52.791 ************************************ 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:52.791 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:53.049 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:53.049 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733736090 00:10:53.049 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.049 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733736090 00:10:53.049 09:21:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:10:53.985 09:21:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.985 [2024-12-09 09:21:31.600255] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:53.985 [2024-12-09 09:21:31.600334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60465 ] 00:10:54.243 [2024-12-09 09:21:31.751046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.243 [2024-12-09 09:21:31.803106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.243 [2024-12-09 09:21:31.845923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.243  [2024-12-09T09:21:32.223Z] Copying: 512/512 [B] (average 500 kBps) 00:10:54.500 00:10:54.500 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:54.500 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733736090 )) 00:10:54.500 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.500 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733736090 )) 00:10:54.500 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.500 [2024-12-09 09:21:32.113158] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:54.500 [2024-12-09 09:21:32.113233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60471 ] 00:10:54.756 [2024-12-09 09:21:32.269583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.756 [2024-12-09 09:21:32.322289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.756 [2024-12-09 09:21:32.365262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.756  [2024-12-09T09:21:32.736Z] Copying: 512/512 [B] (average 500 kBps) 00:10:55.013 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733736092 )) 00:10:55.013 00:10:55.013 real 0m2.072s 00:10:55.013 user 0m0.547s 00:10:55.013 sys 0m0.285s 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.013 ************************************ 00:10:55.013 END TEST dd_flag_noatime_forced_aio 00:10:55.013 ************************************ 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.013 ************************************ 00:10:55.013 START TEST dd_flags_misc_forced_aio 00:10:55.013 ************************************ 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:55.013 09:21:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:55.013 [2024-12-09 09:21:32.731876] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:55.013 [2024-12-09 09:21:32.731960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60503 ] 00:10:55.271 [2024-12-09 09:21:32.883475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.271 [2024-12-09 09:21:32.931708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.271 [2024-12-09 09:21:32.974424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.530  [2024-12-09T09:21:33.253Z] Copying: 512/512 [B] (average 500 kBps) 00:10:55.530 00:10:55.530 09:21:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 50ke82owsmw3bjfeijkyvts1dfa5v2474dgf0gvrzda1m85tfjmf079zp6w61defbhu6edfhcccxgbxbqclgr529a5tkw5mdcyr517y0jl2t7hu1gjm5e9sbpu50rkxlmohpcmen3huxcj7f6que12yd84vbr4y82il2j9a4nt054lylc1dumz4geazthlpo8i3ohspiikxzrxnipeht0yb2b7unsa70amascv6szgaulxr7jesoldnauf2dyny8hpz9wy3t1bzi7owh4fb159hqqzezx09t68aev9qnn9hay432usvlhopin66y9n23fwwtuzmqf4k949y6yshjybk49lyzuchlswm54li87av0vosel9mndbqh5oog3t75xdw9saf7tp2lz3b9z1fzl4h2h4ab0jd55ox5jnjchu1o636m93qn0ckynvyi7l84k63s51unkd8tqj6bbe4dsbq4odnyb4b9ypy2zv1i7xwyrwohkkp6d7aqwjffyf9g == 
\5\0\k\e\8\2\o\w\s\m\w\3\b\j\f\e\i\j\k\y\v\t\s\1\d\f\a\5\v\2\4\7\4\d\g\f\0\g\v\r\z\d\a\1\m\8\5\t\f\j\m\f\0\7\9\z\p\6\w\6\1\d\e\f\b\h\u\6\e\d\f\h\c\c\c\x\g\b\x\b\q\c\l\g\r\5\2\9\a\5\t\k\w\5\m\d\c\y\r\5\1\7\y\0\j\l\2\t\7\h\u\1\g\j\m\5\e\9\s\b\p\u\5\0\r\k\x\l\m\o\h\p\c\m\e\n\3\h\u\x\c\j\7\f\6\q\u\e\1\2\y\d\8\4\v\b\r\4\y\8\2\i\l\2\j\9\a\4\n\t\0\5\4\l\y\l\c\1\d\u\m\z\4\g\e\a\z\t\h\l\p\o\8\i\3\o\h\s\p\i\i\k\x\z\r\x\n\i\p\e\h\t\0\y\b\2\b\7\u\n\s\a\7\0\a\m\a\s\c\v\6\s\z\g\a\u\l\x\r\7\j\e\s\o\l\d\n\a\u\f\2\d\y\n\y\8\h\p\z\9\w\y\3\t\1\b\z\i\7\o\w\h\4\f\b\1\5\9\h\q\q\z\e\z\x\0\9\t\6\8\a\e\v\9\q\n\n\9\h\a\y\4\3\2\u\s\v\l\h\o\p\i\n\6\6\y\9\n\2\3\f\w\w\t\u\z\m\q\f\4\k\9\4\9\y\6\y\s\h\j\y\b\k\4\9\l\y\z\u\c\h\l\s\w\m\5\4\l\i\8\7\a\v\0\v\o\s\e\l\9\m\n\d\b\q\h\5\o\o\g\3\t\7\5\x\d\w\9\s\a\f\7\t\p\2\l\z\3\b\9\z\1\f\z\l\4\h\2\h\4\a\b\0\j\d\5\5\o\x\5\j\n\j\c\h\u\1\o\6\3\6\m\9\3\q\n\0\c\k\y\n\v\y\i\7\l\8\4\k\6\3\s\5\1\u\n\k\d\8\t\q\j\6\b\b\e\4\d\s\b\q\4\o\d\n\y\b\4\b\9\y\p\y\2\z\v\1\i\7\x\w\y\r\w\o\h\k\k\p\6\d\7\a\q\w\j\f\f\y\f\9\g ]] 00:10:55.530 09:21:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:55.530 09:21:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:55.530 [2024-12-09 09:21:33.218065] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:55.530 [2024-12-09 09:21:33.218133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:10:55.788 [2024-12-09 09:21:33.369742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.788 [2024-12-09 09:21:33.423608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.788 [2024-12-09 09:21:33.466481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.788  [2024-12-09T09:21:33.794Z] Copying: 512/512 [B] (average 500 kBps) 00:10:56.071 00:10:56.071 09:21:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 50ke82owsmw3bjfeijkyvts1dfa5v2474dgf0gvrzda1m85tfjmf079zp6w61defbhu6edfhcccxgbxbqclgr529a5tkw5mdcyr517y0jl2t7hu1gjm5e9sbpu50rkxlmohpcmen3huxcj7f6que12yd84vbr4y82il2j9a4nt054lylc1dumz4geazthlpo8i3ohspiikxzrxnipeht0yb2b7unsa70amascv6szgaulxr7jesoldnauf2dyny8hpz9wy3t1bzi7owh4fb159hqqzezx09t68aev9qnn9hay432usvlhopin66y9n23fwwtuzmqf4k949y6yshjybk49lyzuchlswm54li87av0vosel9mndbqh5oog3t75xdw9saf7tp2lz3b9z1fzl4h2h4ab0jd55ox5jnjchu1o636m93qn0ckynvyi7l84k63s51unkd8tqj6bbe4dsbq4odnyb4b9ypy2zv1i7xwyrwohkkp6d7aqwjffyf9g == 
\5\0\k\e\8\2\o\w\s\m\w\3\b\j\f\e\i\j\k\y\v\t\s\1\d\f\a\5\v\2\4\7\4\d\g\f\0\g\v\r\z\d\a\1\m\8\5\t\f\j\m\f\0\7\9\z\p\6\w\6\1\d\e\f\b\h\u\6\e\d\f\h\c\c\c\x\g\b\x\b\q\c\l\g\r\5\2\9\a\5\t\k\w\5\m\d\c\y\r\5\1\7\y\0\j\l\2\t\7\h\u\1\g\j\m\5\e\9\s\b\p\u\5\0\r\k\x\l\m\o\h\p\c\m\e\n\3\h\u\x\c\j\7\f\6\q\u\e\1\2\y\d\8\4\v\b\r\4\y\8\2\i\l\2\j\9\a\4\n\t\0\5\4\l\y\l\c\1\d\u\m\z\4\g\e\a\z\t\h\l\p\o\8\i\3\o\h\s\p\i\i\k\x\z\r\x\n\i\p\e\h\t\0\y\b\2\b\7\u\n\s\a\7\0\a\m\a\s\c\v\6\s\z\g\a\u\l\x\r\7\j\e\s\o\l\d\n\a\u\f\2\d\y\n\y\8\h\p\z\9\w\y\3\t\1\b\z\i\7\o\w\h\4\f\b\1\5\9\h\q\q\z\e\z\x\0\9\t\6\8\a\e\v\9\q\n\n\9\h\a\y\4\3\2\u\s\v\l\h\o\p\i\n\6\6\y\9\n\2\3\f\w\w\t\u\z\m\q\f\4\k\9\4\9\y\6\y\s\h\j\y\b\k\4\9\l\y\z\u\c\h\l\s\w\m\5\4\l\i\8\7\a\v\0\v\o\s\e\l\9\m\n\d\b\q\h\5\o\o\g\3\t\7\5\x\d\w\9\s\a\f\7\t\p\2\l\z\3\b\9\z\1\f\z\l\4\h\2\h\4\a\b\0\j\d\5\5\o\x\5\j\n\j\c\h\u\1\o\6\3\6\m\9\3\q\n\0\c\k\y\n\v\y\i\7\l\8\4\k\6\3\s\5\1\u\n\k\d\8\t\q\j\6\b\b\e\4\d\s\b\q\4\o\d\n\y\b\4\b\9\y\p\y\2\z\v\1\i\7\x\w\y\r\w\o\h\k\k\p\6\d\7\a\q\w\j\f\f\y\f\9\g ]] 00:10:56.071 09:21:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:56.071 09:21:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:56.071 [2024-12-09 09:21:33.722113] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:56.071 [2024-12-09 09:21:33.722187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60518 ] 00:10:56.330 [2024-12-09 09:21:33.876255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.330 [2024-12-09 09:21:33.927949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.330 [2024-12-09 09:21:33.970674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.330  [2024-12-09T09:21:34.312Z] Copying: 512/512 [B] (average 100 kBps) 00:10:56.589 00:10:56.589 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 50ke82owsmw3bjfeijkyvts1dfa5v2474dgf0gvrzda1m85tfjmf079zp6w61defbhu6edfhcccxgbxbqclgr529a5tkw5mdcyr517y0jl2t7hu1gjm5e9sbpu50rkxlmohpcmen3huxcj7f6que12yd84vbr4y82il2j9a4nt054lylc1dumz4geazthlpo8i3ohspiikxzrxnipeht0yb2b7unsa70amascv6szgaulxr7jesoldnauf2dyny8hpz9wy3t1bzi7owh4fb159hqqzezx09t68aev9qnn9hay432usvlhopin66y9n23fwwtuzmqf4k949y6yshjybk49lyzuchlswm54li87av0vosel9mndbqh5oog3t75xdw9saf7tp2lz3b9z1fzl4h2h4ab0jd55ox5jnjchu1o636m93qn0ckynvyi7l84k63s51unkd8tqj6bbe4dsbq4odnyb4b9ypy2zv1i7xwyrwohkkp6d7aqwjffyf9g == 
\5\0\k\e\8\2\o\w\s\m\w\3\b\j\f\e\i\j\k\y\v\t\s\1\d\f\a\5\v\2\4\7\4\d\g\f\0\g\v\r\z\d\a\1\m\8\5\t\f\j\m\f\0\7\9\z\p\6\w\6\1\d\e\f\b\h\u\6\e\d\f\h\c\c\c\x\g\b\x\b\q\c\l\g\r\5\2\9\a\5\t\k\w\5\m\d\c\y\r\5\1\7\y\0\j\l\2\t\7\h\u\1\g\j\m\5\e\9\s\b\p\u\5\0\r\k\x\l\m\o\h\p\c\m\e\n\3\h\u\x\c\j\7\f\6\q\u\e\1\2\y\d\8\4\v\b\r\4\y\8\2\i\l\2\j\9\a\4\n\t\0\5\4\l\y\l\c\1\d\u\m\z\4\g\e\a\z\t\h\l\p\o\8\i\3\o\h\s\p\i\i\k\x\z\r\x\n\i\p\e\h\t\0\y\b\2\b\7\u\n\s\a\7\0\a\m\a\s\c\v\6\s\z\g\a\u\l\x\r\7\j\e\s\o\l\d\n\a\u\f\2\d\y\n\y\8\h\p\z\9\w\y\3\t\1\b\z\i\7\o\w\h\4\f\b\1\5\9\h\q\q\z\e\z\x\0\9\t\6\8\a\e\v\9\q\n\n\9\h\a\y\4\3\2\u\s\v\l\h\o\p\i\n\6\6\y\9\n\2\3\f\w\w\t\u\z\m\q\f\4\k\9\4\9\y\6\y\s\h\j\y\b\k\4\9\l\y\z\u\c\h\l\s\w\m\5\4\l\i\8\7\a\v\0\v\o\s\e\l\9\m\n\d\b\q\h\5\o\o\g\3\t\7\5\x\d\w\9\s\a\f\7\t\p\2\l\z\3\b\9\z\1\f\z\l\4\h\2\h\4\a\b\0\j\d\5\5\o\x\5\j\n\j\c\h\u\1\o\6\3\6\m\9\3\q\n\0\c\k\y\n\v\y\i\7\l\8\4\k\6\3\s\5\1\u\n\k\d\8\t\q\j\6\b\b\e\4\d\s\b\q\4\o\d\n\y\b\4\b\9\y\p\y\2\z\v\1\i\7\x\w\y\r\w\o\h\k\k\p\6\d\7\a\q\w\j\f\f\y\f\9\g ]] 00:10:56.589 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:56.589 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:56.589 [2024-12-09 09:21:34.235957] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:56.589 [2024-12-09 09:21:34.236024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60520 ] 00:10:56.847 [2024-12-09 09:21:34.385624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.847 [2024-12-09 09:21:34.437809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.847 [2024-12-09 09:21:34.481165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.847  [2024-12-09T09:21:34.828Z] Copying: 512/512 [B] (average 500 kBps) 00:10:57.105 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 50ke82owsmw3bjfeijkyvts1dfa5v2474dgf0gvrzda1m85tfjmf079zp6w61defbhu6edfhcccxgbxbqclgr529a5tkw5mdcyr517y0jl2t7hu1gjm5e9sbpu50rkxlmohpcmen3huxcj7f6que12yd84vbr4y82il2j9a4nt054lylc1dumz4geazthlpo8i3ohspiikxzrxnipeht0yb2b7unsa70amascv6szgaulxr7jesoldnauf2dyny8hpz9wy3t1bzi7owh4fb159hqqzezx09t68aev9qnn9hay432usvlhopin66y9n23fwwtuzmqf4k949y6yshjybk49lyzuchlswm54li87av0vosel9mndbqh5oog3t75xdw9saf7tp2lz3b9z1fzl4h2h4ab0jd55ox5jnjchu1o636m93qn0ckynvyi7l84k63s51unkd8tqj6bbe4dsbq4odnyb4b9ypy2zv1i7xwyrwohkkp6d7aqwjffyf9g == 
\5\0\k\e\8\2\o\w\s\m\w\3\b\j\f\e\i\j\k\y\v\t\s\1\d\f\a\5\v\2\4\7\4\d\g\f\0\g\v\r\z\d\a\1\m\8\5\t\f\j\m\f\0\7\9\z\p\6\w\6\1\d\e\f\b\h\u\6\e\d\f\h\c\c\c\x\g\b\x\b\q\c\l\g\r\5\2\9\a\5\t\k\w\5\m\d\c\y\r\5\1\7\y\0\j\l\2\t\7\h\u\1\g\j\m\5\e\9\s\b\p\u\5\0\r\k\x\l\m\o\h\p\c\m\e\n\3\h\u\x\c\j\7\f\6\q\u\e\1\2\y\d\8\4\v\b\r\4\y\8\2\i\l\2\j\9\a\4\n\t\0\5\4\l\y\l\c\1\d\u\m\z\4\g\e\a\z\t\h\l\p\o\8\i\3\o\h\s\p\i\i\k\x\z\r\x\n\i\p\e\h\t\0\y\b\2\b\7\u\n\s\a\7\0\a\m\a\s\c\v\6\s\z\g\a\u\l\x\r\7\j\e\s\o\l\d\n\a\u\f\2\d\y\n\y\8\h\p\z\9\w\y\3\t\1\b\z\i\7\o\w\h\4\f\b\1\5\9\h\q\q\z\e\z\x\0\9\t\6\8\a\e\v\9\q\n\n\9\h\a\y\4\3\2\u\s\v\l\h\o\p\i\n\6\6\y\9\n\2\3\f\w\w\t\u\z\m\q\f\4\k\9\4\9\y\6\y\s\h\j\y\b\k\4\9\l\y\z\u\c\h\l\s\w\m\5\4\l\i\8\7\a\v\0\v\o\s\e\l\9\m\n\d\b\q\h\5\o\o\g\3\t\7\5\x\d\w\9\s\a\f\7\t\p\2\l\z\3\b\9\z\1\f\z\l\4\h\2\h\4\a\b\0\j\d\5\5\o\x\5\j\n\j\c\h\u\1\o\6\3\6\m\9\3\q\n\0\c\k\y\n\v\y\i\7\l\8\4\k\6\3\s\5\1\u\n\k\d\8\t\q\j\6\b\b\e\4\d\s\b\q\4\o\d\n\y\b\4\b\9\y\p\y\2\z\v\1\i\7\x\w\y\r\w\o\h\k\k\p\6\d\7\a\q\w\j\f\f\y\f\9\g ]] 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.105 09:21:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:57.105 [2024-12-09 09:21:34.752445] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:57.105 [2024-12-09 09:21:34.752540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60533 ] 00:10:57.363 [2024-12-09 09:21:34.902835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.363 [2024-12-09 09:21:34.952150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.363 [2024-12-09 09:21:34.995116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.363  [2024-12-09T09:21:35.343Z] Copying: 512/512 [B] (average 500 kBps) 00:10:57.620 00:10:57.620 09:21:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n7kirlpf20u6sorwi6qv0t6hjmml18lwip20yw89szuz66xmi1lch980vyo08oysobt9jq64c7t4ek4ssxxtrxjlkishja3piscxueooub3lliaaz8v063udvy9c3od7upgms68vm5e9vx5t18xx2v55jbbnh2tm3fdaibpud8rn5gsgo27rqd1jnfergxh9h466ja6qje36rafcfvxmrqlwgn7tb6ht18y4mt34rivpdh26y9fqog6j4p9n85k9ucxdy9w673x9qem3i4s0wq6vtwtz1mthpf7t2jsyhx712xkv06giwwhi3wvjgl9pc08s0lnvsqikq9kl3rigrdxrapsbh27cicve10rno87e720j65hf9tzy0smkscqj0gud8f0976m4e8bgsuv0k21y0pqxenpj36hvzi5mcybvcgzb2ky628mkwaba3e00jvdtmmtxxcdlxlmvghh2ni6rq7q71uhoi5zm6um6hhcnzf88gw9qakwyd2pu06ys == \n\7\k\i\r\l\p\f\2\0\u\6\s\o\r\w\i\6\q\v\0\t\6\h\j\m\m\l\1\8\l\w\i\p\2\0\y\w\8\9\s\z\u\z\6\6\x\m\i\1\l\c\h\9\8\0\v\y\o\0\8\o\y\s\o\b\t\9\j\q\6\4\c\7\t\4\e\k\4\s\s\x\x\t\r\x\j\l\k\i\s\h\j\a\3\p\i\s\c\x\u\e\o\o\u\b\3\l\l\i\a\a\z\8\v\0\6\3\u\d\v\y\9\c\3\o\d\7\u\p\g\m\s\6\8\v\m\5\e\9\v\x\5\t\1\8\x\x\2\v\5\5\j\b\b\n\h\2\t\m\3\f\d\a\i\b\p\u\d\8\r\n\5\g\s\g\o\2\7\r\q\d\1\j\n\f\e\r\g\x\h\9\h\4\6\6\j\a\6\q\j\e\3\6\r\a\f\c\f\v\x\m\r\q\l\w\g\n\7\t\b\6\h\t\1\8\y\4\m\t\3\4\r\i\v\p\d\h\2\6\y\9\f\q\o\g\6\j\4\p\9\n\8\5\k\9\u\c\x\d\y\9\w\6\7\3\x\9\q\e\m\3\i\4\s\0\w\q\6\v\t\w\t\z\1\m\t\h\p\f\7\t\2\j\s\y\h\x\7\1\2\x\k\v\0\6\g\i\w\w\h\i\3\w\v\j\g\l\9\p\c\0\8\s\0\l\n\v\s\q\i\k\q\9\k\l\3\r\i\g\r\d\x\r\a\p\s\b\h\2\7\c\i\c\v\e\1\0\r\n\o\8\7\e\7\2\0\j\6\5\h\f\9\t\z\y\0\s\m\k\s\c\q\j\0\g\u\d\8\f\0\9\7\6\m\4\e\8\b\g\s\u\v\0\k\2\1\y\0\p\q\x\e\n\p\j\3\6\h\v\z\i\5\m\c\y\b\v\c\g\z\b\2\k\y\6\2\8\m\k\w\a\b\a\3\e\0\0\j\v\d\t\m\m\t\x\x\c\d\l\x\l\m\v\g\h\h\2\n\i\6\r\q\7\q\7\1\u\h\o\i\5\z\m\6\u\m\6\h\h\c\n\z\f\8\8\g\w\9\q\a\k\w\y\d\2\p\u\0\6\y\s ]] 00:10:57.620 09:21:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:57.620 09:21:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:57.620 [2024-12-09 09:21:35.248455] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:57.620 [2024-12-09 09:21:35.248552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60537 ] 00:10:57.877 [2024-12-09 09:21:35.400348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.877 [2024-12-09 09:21:35.451448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.877 [2024-12-09 09:21:35.493390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.877  [2024-12-09T09:21:35.858Z] Copying: 512/512 [B] (average 500 kBps) 00:10:58.135 00:10:58.135 09:21:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n7kirlpf20u6sorwi6qv0t6hjmml18lwip20yw89szuz66xmi1lch980vyo08oysobt9jq64c7t4ek4ssxxtrxjlkishja3piscxueooub3lliaaz8v063udvy9c3od7upgms68vm5e9vx5t18xx2v55jbbnh2tm3fdaibpud8rn5gsgo27rqd1jnfergxh9h466ja6qje36rafcfvxmrqlwgn7tb6ht18y4mt34rivpdh26y9fqog6j4p9n85k9ucxdy9w673x9qem3i4s0wq6vtwtz1mthpf7t2jsyhx712xkv06giwwhi3wvjgl9pc08s0lnvsqikq9kl3rigrdxrapsbh27cicve10rno87e720j65hf9tzy0smkscqj0gud8f0976m4e8bgsuv0k21y0pqxenpj36hvzi5mcybvcgzb2ky628mkwaba3e00jvdtmmtxxcdlxlmvghh2ni6rq7q71uhoi5zm6um6hhcnzf88gw9qakwyd2pu06ys == \n\7\k\i\r\l\p\f\2\0\u\6\s\o\r\w\i\6\q\v\0\t\6\h\j\m\m\l\1\8\l\w\i\p\2\0\y\w\8\9\s\z\u\z\6\6\x\m\i\1\l\c\h\9\8\0\v\y\o\0\8\o\y\s\o\b\t\9\j\q\6\4\c\7\t\4\e\k\4\s\s\x\x\t\r\x\j\l\k\i\s\h\j\a\3\p\i\s\c\x\u\e\o\o\u\b\3\l\l\i\a\a\z\8\v\0\6\3\u\d\v\y\9\c\3\o\d\7\u\p\g\m\s\6\8\v\m\5\e\9\v\x\5\t\1\8\x\x\2\v\5\5\j\b\b\n\h\2\t\m\3\f\d\a\i\b\p\u\d\8\r\n\5\g\s\g\o\2\7\r\q\d\1\j\n\f\e\r\g\x\h\9\h\4\6\6\j\a\6\q\j\e\3\6\r\a\f\c\f\v\x\m\r\q\l\w\g\n\7\t\b\6\h\t\1\8\y\4\m\t\3\4\r\i\v\p\d\h\2\6\y\9\f\q\o\g\6\j\4\p\9\n\8\5\k\9\u\c\x\d\y\9\w\6\7\3\x\9\q\e\m\3\i\4\s\0\w\q\6\v\t\w\t\z\1\m\t\h\p\f\7\t\2\j\s\y\h\x\7\1\2\x\k\v\0\6\g\i\w\w\h\i\3\w\v\j\g\l\9\p\c\0\8\s\0\l\n\v\s\q\i\k\q\9\k\l\3\r\i\g\r\d\x\r\a\p\s\b\h\2\7\c\i\c\v\e\1\0\r\n\o\8\7\e\7\2\0\j\6\5\h\f\9\t\z\y\0\s\m\k\s\c\q\j\0\g\u\d\8\f\0\9\7\6\m\4\e\8\b\g\s\u\v\0\k\2\1\y\0\p\q\x\e\n\p\j\3\6\h\v\z\i\5\m\c\y\b\v\c\g\z\b\2\k\y\6\2\8\m\k\w\a\b\a\3\e\0\0\j\v\d\t\m\m\t\x\x\c\d\l\x\l\m\v\g\h\h\2\n\i\6\r\q\7\q\7\1\u\h\o\i\5\z\m\6\u\m\6\h\h\c\n\z\f\8\8\g\w\9\q\a\k\w\y\d\2\p\u\0\6\y\s ]] 00:10:58.135 09:21:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.135 09:21:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:58.135 [2024-12-09 09:21:35.742810] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:58.135 [2024-12-09 09:21:35.742881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60550 ] 00:10:58.396 [2024-12-09 09:21:35.895120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.396 [2024-12-09 09:21:35.946198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.396 [2024-12-09 09:21:35.987793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.396  [2024-12-09T09:21:36.381Z] Copying: 512/512 [B] (average 250 kBps) 00:10:58.658 00:10:58.658 09:21:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n7kirlpf20u6sorwi6qv0t6hjmml18lwip20yw89szuz66xmi1lch980vyo08oysobt9jq64c7t4ek4ssxxtrxjlkishja3piscxueooub3lliaaz8v063udvy9c3od7upgms68vm5e9vx5t18xx2v55jbbnh2tm3fdaibpud8rn5gsgo27rqd1jnfergxh9h466ja6qje36rafcfvxmrqlwgn7tb6ht18y4mt34rivpdh26y9fqog6j4p9n85k9ucxdy9w673x9qem3i4s0wq6vtwtz1mthpf7t2jsyhx712xkv06giwwhi3wvjgl9pc08s0lnvsqikq9kl3rigrdxrapsbh27cicve10rno87e720j65hf9tzy0smkscqj0gud8f0976m4e8bgsuv0k21y0pqxenpj36hvzi5mcybvcgzb2ky628mkwaba3e00jvdtmmtxxcdlxlmvghh2ni6rq7q71uhoi5zm6um6hhcnzf88gw9qakwyd2pu06ys == \n\7\k\i\r\l\p\f\2\0\u\6\s\o\r\w\i\6\q\v\0\t\6\h\j\m\m\l\1\8\l\w\i\p\2\0\y\w\8\9\s\z\u\z\6\6\x\m\i\1\l\c\h\9\8\0\v\y\o\0\8\o\y\s\o\b\t\9\j\q\6\4\c\7\t\4\e\k\4\s\s\x\x\t\r\x\j\l\k\i\s\h\j\a\3\p\i\s\c\x\u\e\o\o\u\b\3\l\l\i\a\a\z\8\v\0\6\3\u\d\v\y\9\c\3\o\d\7\u\p\g\m\s\6\8\v\m\5\e\9\v\x\5\t\1\8\x\x\2\v\5\5\j\b\b\n\h\2\t\m\3\f\d\a\i\b\p\u\d\8\r\n\5\g\s\g\o\2\7\r\q\d\1\j\n\f\e\r\g\x\h\9\h\4\6\6\j\a\6\q\j\e\3\6\r\a\f\c\f\v\x\m\r\q\l\w\g\n\7\t\b\6\h\t\1\8\y\4\m\t\3\4\r\i\v\p\d\h\2\6\y\9\f\q\o\g\6\j\4\p\9\n\8\5\k\9\u\c\x\d\y\9\w\6\7\3\x\9\q\e\m\3\i\4\s\0\w\q\6\v\t\w\t\z\1\m\t\h\p\f\7\t\2\j\s\y\h\x\7\1\2\x\k\v\0\6\g\i\w\w\h\i\3\w\v\j\g\l\9\p\c\0\8\s\0\l\n\v\s\q\i\k\q\9\k\l\3\r\i\g\r\d\x\r\a\p\s\b\h\2\7\c\i\c\v\e\1\0\r\n\o\8\7\e\7\2\0\j\6\5\h\f\9\t\z\y\0\s\m\k\s\c\q\j\0\g\u\d\8\f\0\9\7\6\m\4\e\8\b\g\s\u\v\0\k\2\1\y\0\p\q\x\e\n\p\j\3\6\h\v\z\i\5\m\c\y\b\v\c\g\z\b\2\k\y\6\2\8\m\k\w\a\b\a\3\e\0\0\j\v\d\t\m\m\t\x\x\c\d\l\x\l\m\v\g\h\h\2\n\i\6\r\q\7\q\7\1\u\h\o\i\5\z\m\6\u\m\6\h\h\c\n\z\f\8\8\g\w\9\q\a\k\w\y\d\2\p\u\0\6\y\s ]] 00:10:58.658 09:21:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.658 09:21:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:58.658 [2024-12-09 09:21:36.236586] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:10:58.658 [2024-12-09 09:21:36.236654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ] 00:10:58.916 [2024-12-09 09:21:36.384886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.916 [2024-12-09 09:21:36.435941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.916 [2024-12-09 09:21:36.477513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:58.916  [2024-12-09T09:21:36.898Z] Copying: 512/512 [B] (average 500 kBps) 00:10:59.175 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n7kirlpf20u6sorwi6qv0t6hjmml18lwip20yw89szuz66xmi1lch980vyo08oysobt9jq64c7t4ek4ssxxtrxjlkishja3piscxueooub3lliaaz8v063udvy9c3od7upgms68vm5e9vx5t18xx2v55jbbnh2tm3fdaibpud8rn5gsgo27rqd1jnfergxh9h466ja6qje36rafcfvxmrqlwgn7tb6ht18y4mt34rivpdh26y9fqog6j4p9n85k9ucxdy9w673x9qem3i4s0wq6vtwtz1mthpf7t2jsyhx712xkv06giwwhi3wvjgl9pc08s0lnvsqikq9kl3rigrdxrapsbh27cicve10rno87e720j65hf9tzy0smkscqj0gud8f0976m4e8bgsuv0k21y0pqxenpj36hvzi5mcybvcgzb2ky628mkwaba3e00jvdtmmtxxcdlxlmvghh2ni6rq7q71uhoi5zm6um6hhcnzf88gw9qakwyd2pu06ys == \n\7\k\i\r\l\p\f\2\0\u\6\s\o\r\w\i\6\q\v\0\t\6\h\j\m\m\l\1\8\l\w\i\p\2\0\y\w\8\9\s\z\u\z\6\6\x\m\i\1\l\c\h\9\8\0\v\y\o\0\8\o\y\s\o\b\t\9\j\q\6\4\c\7\t\4\e\k\4\s\s\x\x\t\r\x\j\l\k\i\s\h\j\a\3\p\i\s\c\x\u\e\o\o\u\b\3\l\l\i\a\a\z\8\v\0\6\3\u\d\v\y\9\c\3\o\d\7\u\p\g\m\s\6\8\v\m\5\e\9\v\x\5\t\1\8\x\x\2\v\5\5\j\b\b\n\h\2\t\m\3\f\d\a\i\b\p\u\d\8\r\n\5\g\s\g\o\2\7\r\q\d\1\j\n\f\e\r\g\x\h\9\h\4\6\6\j\a\6\q\j\e\3\6\r\a\f\c\f\v\x\m\r\q\l\w\g\n\7\t\b\6\h\t\1\8\y\4\m\t\3\4\r\i\v\p\d\h\2\6\y\9\f\q\o\g\6\j\4\p\9\n\8\5\k\9\u\c\x\d\y\9\w\6\7\3\x\9\q\e\m\3\i\4\s\0\w\q\6\v\t\w\t\z\1\m\t\h\p\f\7\t\2\j\s\y\h\x\7\1\2\x\k\v\0\6\g\i\w\w\h\i\3\w\v\j\g\l\9\p\c\0\8\s\0\l\n\v\s\q\i\k\q\9\k\l\3\r\i\g\r\d\x\r\a\p\s\b\h\2\7\c\i\c\v\e\1\0\r\n\o\8\7\e\7\2\0\j\6\5\h\f\9\t\z\y\0\s\m\k\s\c\q\j\0\g\u\d\8\f\0\9\7\6\m\4\e\8\b\g\s\u\v\0\k\2\1\y\0\p\q\x\e\n\p\j\3\6\h\v\z\i\5\m\c\y\b\v\c\g\z\b\2\k\y\6\2\8\m\k\w\a\b\a\3\e\0\0\j\v\d\t\m\m\t\x\x\c\d\l\x\l\m\v\g\h\h\2\n\i\6\r\q\7\q\7\1\u\h\o\i\5\z\m\6\u\m\6\h\h\c\n\z\f\8\8\g\w\9\q\a\k\w\y\d\2\p\u\0\6\y\s ]] 00:10:59.175 00:10:59.175 real 0m4.022s 00:10:59.175 user 0m2.081s 00:10:59.175 sys 0m0.980s 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.175 ************************************ 00:10:59.175 END TEST dd_flags_misc_forced_aio 00:10:59.175 ************************************ 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:59.175 ************************************ 00:10:59.175 END TEST spdk_dd_posix 00:10:59.175 ************************************ 00:10:59.175 00:10:59.175 real 0m18.634s 00:10:59.175 user 0m8.464s 00:10:59.175 sys 0m5.921s 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.175 09:21:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:59.175 09:21:36 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:59.175 09:21:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.175 09:21:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.175 09:21:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:59.175 ************************************ 00:10:59.175 START TEST spdk_dd_malloc 00:10:59.175 ************************************ 00:10:59.175 09:21:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:59.435 * Looking for test storage... 00:10:59.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:59.435 09:21:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.435 09:21:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.435 09:21:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.435 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.436 --rc genhtml_branch_coverage=1 00:10:59.436 --rc genhtml_function_coverage=1 00:10:59.436 --rc genhtml_legend=1 00:10:59.436 --rc geninfo_all_blocks=1 00:10:59.436 --rc geninfo_unexecuted_blocks=1 00:10:59.436 00:10:59.436 ' 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.436 --rc genhtml_branch_coverage=1 00:10:59.436 --rc genhtml_function_coverage=1 00:10:59.436 --rc genhtml_legend=1 00:10:59.436 --rc geninfo_all_blocks=1 00:10:59.436 --rc geninfo_unexecuted_blocks=1 00:10:59.436 00:10:59.436 ' 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.436 --rc genhtml_branch_coverage=1 00:10:59.436 --rc genhtml_function_coverage=1 00:10:59.436 --rc genhtml_legend=1 00:10:59.436 --rc geninfo_all_blocks=1 00:10:59.436 --rc geninfo_unexecuted_blocks=1 00:10:59.436 00:10:59.436 ' 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.436 --rc genhtml_branch_coverage=1 00:10:59.436 --rc genhtml_function_coverage=1 00:10:59.436 --rc genhtml_legend=1 00:10:59.436 --rc geninfo_all_blocks=1 00:10:59.436 --rc geninfo_unexecuted_blocks=1 00:10:59.436 00:10:59.436 ' 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.436 09:21:37 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:59.436 ************************************ 00:10:59.436 START TEST dd_malloc_copy 00:10:59.436 ************************************ 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:59.436 09:21:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:59.436 [2024-12-09 09:21:37.118687] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:10:59.436 [2024-12-09 09:21:37.118763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:10:59.436 { 00:10:59.436 "subsystems": [ 00:10:59.436 { 00:10:59.436 "subsystem": "bdev", 00:10:59.436 "config": [ 00:10:59.436 { 00:10:59.436 "params": { 00:10:59.436 "block_size": 512, 00:10:59.436 "num_blocks": 1048576, 00:10:59.436 "name": "malloc0" 00:10:59.436 }, 00:10:59.436 "method": "bdev_malloc_create" 00:10:59.436 }, 00:10:59.436 { 00:10:59.436 "params": { 00:10:59.436 "block_size": 512, 00:10:59.436 "num_blocks": 1048576, 00:10:59.436 "name": "malloc1" 00:10:59.436 }, 00:10:59.436 "method": "bdev_malloc_create" 00:10:59.436 }, 00:10:59.436 { 00:10:59.436 "method": "bdev_wait_for_examine" 00:10:59.436 } 00:10:59.436 ] 00:10:59.436 } 00:10:59.436 ] 00:10:59.436 } 00:10:59.696 [2024-12-09 09:21:37.268958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.696 [2024-12-09 09:21:37.321274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.696 [2024-12-09 09:21:37.364068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.071  [2024-12-09T09:21:39.732Z] Copying: 252/512 [MB] (252 MBps) [2024-12-09T09:21:39.732Z] Copying: 503/512 [MB] (251 MBps) [2024-12-09T09:21:40.300Z] Copying: 512/512 [MB] (average 251 MBps) 00:11:02.577 00:11:02.577 09:21:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:02.577 09:21:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:02.577 09:21:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:02.577 09:21:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:02.577 [2024-12-09 09:21:40.191644] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:02.577 [2024-12-09 09:21:40.191731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:11:02.577 { 00:11:02.577 "subsystems": [ 00:11:02.577 { 00:11:02.577 "subsystem": "bdev", 00:11:02.577 "config": [ 00:11:02.577 { 00:11:02.577 "params": { 00:11:02.577 "block_size": 512, 00:11:02.577 "num_blocks": 1048576, 00:11:02.577 "name": "malloc0" 00:11:02.577 }, 00:11:02.577 "method": "bdev_malloc_create" 00:11:02.577 }, 00:11:02.577 { 00:11:02.577 "params": { 00:11:02.577 "block_size": 512, 00:11:02.577 "num_blocks": 1048576, 00:11:02.577 "name": "malloc1" 00:11:02.577 }, 00:11:02.577 "method": "bdev_malloc_create" 00:11:02.577 }, 00:11:02.577 { 00:11:02.577 "method": "bdev_wait_for_examine" 00:11:02.577 } 00:11:02.577 ] 00:11:02.577 } 00:11:02.577 ] 00:11:02.577 } 00:11:02.837 [2024-12-09 09:21:40.343673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.837 [2024-12-09 09:21:40.398669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.837 [2024-12-09 09:21:40.441159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.214  [2024-12-09T09:21:42.874Z] Copying: 252/512 [MB] (252 MBps) [2024-12-09T09:21:42.874Z] Copying: 510/512 [MB] (257 MBps) [2024-12-09T09:21:43.442Z] Copying: 512/512 [MB] (average 255 MBps) 00:11:05.719 00:11:05.719 00:11:05.719 real 0m6.117s 00:11:05.719 user 0m5.269s 00:11:05.719 sys 0m0.699s 00:11:05.719 09:21:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.719 09:21:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:05.719 ************************************ 00:11:05.719 END TEST dd_malloc_copy 00:11:05.719 ************************************ 00:11:05.719 ************************************ 00:11:05.719 END TEST spdk_dd_malloc 00:11:05.719 ************************************ 00:11:05.719 00:11:05.719 real 0m6.426s 00:11:05.719 user 0m5.429s 00:11:05.719 sys 0m0.862s 00:11:05.719 09:21:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.719 09:21:43 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:05.719 09:21:43 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:05.719 09:21:43 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:05.719 09:21:43 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.719 09:21:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:05.719 ************************************ 00:11:05.719 START TEST spdk_dd_bdev_to_bdev 00:11:05.719 ************************************ 00:11:05.719 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:05.719 * Looking for test storage... 
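For reference, the dd_malloc_copy pass that completed above is a pure bdev-to-bdev copy: two 512 MiB malloc bdevs (1048576 blocks of 512 B each) are created from a JSON config fed to spdk_dd on a spare file descriptor, then data is copied malloc0 to malloc1 and back, landing around 250 MB/s in this run. A hedged standalone equivalent; conf.json is an illustrative file name, the JSON fields mirror the config echoed in the trace, and spdk_dd again stands for the build/bin/spdk_dd binary:

    cat > conf.json << 'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
              "method": "bdev_malloc_create" },
            { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    spdk_dd --ib=malloc0 --ob=malloc1 --json conf.json   # forward copy, 512 MiB
    spdk_dd --ib=malloc1 --ob=malloc0 --json conf.json   # reverse copy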
00:11:05.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:11:05.977 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.978 --rc genhtml_branch_coverage=1 00:11:05.978 --rc genhtml_function_coverage=1 00:11:05.978 --rc genhtml_legend=1 00:11:05.978 --rc geninfo_all_blocks=1 00:11:05.978 --rc geninfo_unexecuted_blocks=1 00:11:05.978 00:11:05.978 ' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.978 --rc genhtml_branch_coverage=1 00:11:05.978 --rc genhtml_function_coverage=1 00:11:05.978 --rc genhtml_legend=1 00:11:05.978 --rc geninfo_all_blocks=1 00:11:05.978 --rc geninfo_unexecuted_blocks=1 00:11:05.978 00:11:05.978 ' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.978 --rc genhtml_branch_coverage=1 00:11:05.978 --rc genhtml_function_coverage=1 00:11:05.978 --rc genhtml_legend=1 00:11:05.978 --rc geninfo_all_blocks=1 00:11:05.978 --rc geninfo_unexecuted_blocks=1 00:11:05.978 00:11:05.978 ' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.978 --rc genhtml_branch_coverage=1 00:11:05.978 --rc genhtml_function_coverage=1 00:11:05.978 --rc genhtml_legend=1 00:11:05.978 --rc geninfo_all_blocks=1 00:11:05.978 --rc geninfo_unexecuted_blocks=1 00:11:05.978 00:11:05.978 ' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.978 09:21:43 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:05.978 ************************************ 00:11:05.978 START TEST dd_inflate_file 00:11:05.978 ************************************ 00:11:05.978 09:21:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:05.978 [2024-12-09 09:21:43.602089] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:05.978 [2024-12-09 09:21:43.602167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60784 ] 00:11:06.236 [2024-12-09 09:21:43.737728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.236 [2024-12-09 09:21:43.798670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.236 [2024-12-09 09:21:43.842540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.236  [2024-12-09T09:21:44.269Z] Copying: 64/64 [MB] (average 1422 MBps) 00:11:06.546 00:11:06.546 00:11:06.546 real 0m0.522s 00:11:06.546 user 0m0.295s 00:11:06.546 sys 0m0.280s 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 ************************************ 00:11:06.546 END TEST dd_inflate_file 00:11:06.546 ************************************ 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:06.546 ************************************ 00:11:06.546 START TEST dd_copy_to_out_bdev 00:11:06.546 ************************************ 00:11:06.546 09:21:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:06.546 { 00:11:06.546 "subsystems": [ 00:11:06.546 { 00:11:06.546 "subsystem": "bdev", 00:11:06.546 "config": [ 00:11:06.546 { 00:11:06.546 "params": { 00:11:06.546 "trtype": "pcie", 00:11:06.546 "traddr": "0000:00:10.0", 00:11:06.546 "name": "Nvme0" 00:11:06.546 }, 00:11:06.546 "method": "bdev_nvme_attach_controller" 00:11:06.546 }, 00:11:06.546 { 00:11:06.546 "params": { 00:11:06.546 "trtype": "pcie", 00:11:06.546 "traddr": "0000:00:11.0", 00:11:06.546 "name": "Nvme1" 00:11:06.546 }, 00:11:06.546 "method": "bdev_nvme_attach_controller" 00:11:06.546 }, 00:11:06.546 { 00:11:06.546 "method": "bdev_wait_for_examine" 00:11:06.546 } 00:11:06.546 ] 00:11:06.546 } 00:11:06.546 ] 00:11:06.546 } 00:11:06.546 [2024-12-09 09:21:44.193381] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:06.546 [2024-12-09 09:21:44.193489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:11:06.866 [2024-12-09 09:21:44.350020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.866 [2024-12-09 09:21:44.419795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.866 [2024-12-09 09:21:44.476042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.802  [2024-12-09T09:21:45.784Z] Copying: 64/64 [MB] (average 84 MBps) 00:11:08.061 00:11:08.061 00:11:08.061 real 0m1.468s 00:11:08.061 user 0m1.233s 00:11:08.061 sys 0m1.097s 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:08.061 ************************************ 00:11:08.061 END TEST dd_copy_to_out_bdev 00:11:08.061 ************************************ 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:08.061 ************************************ 00:11:08.061 START TEST dd_offset_magic 00:11:08.061 ************************************ 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:08.061 09:21:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:08.061 [2024-12-09 09:21:45.737856] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
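The dd_inflate_file and dd_copy_to_out_bdev steps traced above first grow dd.dump0 by 64 MiB of zeroes via --oflag=append (the 26-character magic line plus its newline accounts for the extra 27 bytes behind the 67108891 size check) and then copy the file into the Nvme0n1 bdev. A rough equivalent under the same PCIe addresses; nvme.json is an assumed file name carrying the bdev_nvme_attach_controller entries shown in the trace, and spdk_dd is the same built binary:

    # inflate: append 64 x 1 MiB of zeroes to the dump file
    spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64

    # copy the whole file into the first NVMe namespace bdev;
    # nvme.json (assumed name) attaches Nvme0 at 0000:00:10.0 and Nvme1 at 0000:00:11.0:
    #   { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" }, "method": "bdev_nvme_attach_controller" }
    #   { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" }, "method": "bdev_nvme_attach_controller" }
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json nvme.json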
00:11:08.061 [2024-12-09 09:21:45.737941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60862 ] 00:11:08.061 { 00:11:08.061 "subsystems": [ 00:11:08.061 { 00:11:08.061 "subsystem": "bdev", 00:11:08.061 "config": [ 00:11:08.061 { 00:11:08.061 "params": { 00:11:08.061 "trtype": "pcie", 00:11:08.061 "traddr": "0000:00:10.0", 00:11:08.061 "name": "Nvme0" 00:11:08.061 }, 00:11:08.061 "method": "bdev_nvme_attach_controller" 00:11:08.061 }, 00:11:08.061 { 00:11:08.061 "params": { 00:11:08.061 "trtype": "pcie", 00:11:08.061 "traddr": "0000:00:11.0", 00:11:08.061 "name": "Nvme1" 00:11:08.061 }, 00:11:08.061 "method": "bdev_nvme_attach_controller" 00:11:08.061 }, 00:11:08.061 { 00:11:08.061 "method": "bdev_wait_for_examine" 00:11:08.061 } 00:11:08.061 ] 00:11:08.061 } 00:11:08.061 ] 00:11:08.061 } 00:11:08.321 [2024-12-09 09:21:45.887902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.321 [2024-12-09 09:21:45.942919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.321 [2024-12-09 09:21:45.985806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.580  [2024-12-09T09:21:46.562Z] Copying: 65/65 [MB] (average 802 MBps) 00:11:08.839 00:11:08.839 09:21:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:08.839 09:21:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:08.839 09:21:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:08.839 09:21:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:08.839 [2024-12-09 09:21:46.502107] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:08.839 [2024-12-09 09:21:46.502188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60876 ] 00:11:08.839 { 00:11:08.839 "subsystems": [ 00:11:08.839 { 00:11:08.839 "subsystem": "bdev", 00:11:08.839 "config": [ 00:11:08.839 { 00:11:08.839 "params": { 00:11:08.839 "trtype": "pcie", 00:11:08.839 "traddr": "0000:00:10.0", 00:11:08.839 "name": "Nvme0" 00:11:08.839 }, 00:11:08.839 "method": "bdev_nvme_attach_controller" 00:11:08.839 }, 00:11:08.839 { 00:11:08.839 "params": { 00:11:08.839 "trtype": "pcie", 00:11:08.839 "traddr": "0000:00:11.0", 00:11:08.839 "name": "Nvme1" 00:11:08.839 }, 00:11:08.839 "method": "bdev_nvme_attach_controller" 00:11:08.839 }, 00:11:08.839 { 00:11:08.839 "method": "bdev_wait_for_examine" 00:11:08.839 } 00:11:08.839 ] 00:11:08.839 } 00:11:08.839 ] 00:11:08.839 } 00:11:09.098 [2024-12-09 09:21:46.650902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.098 [2024-12-09 09:21:46.703359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.098 [2024-12-09 09:21:46.745836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.357  [2024-12-09T09:21:47.339Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:09.616 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:09.616 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:09.616 [2024-12-09 09:21:47.142798] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:09.616 [2024-12-09 09:21:47.143295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60898 ] 00:11:09.616 { 00:11:09.616 "subsystems": [ 00:11:09.616 { 00:11:09.616 "subsystem": "bdev", 00:11:09.616 "config": [ 00:11:09.616 { 00:11:09.616 "params": { 00:11:09.616 "trtype": "pcie", 00:11:09.616 "traddr": "0000:00:10.0", 00:11:09.616 "name": "Nvme0" 00:11:09.616 }, 00:11:09.616 "method": "bdev_nvme_attach_controller" 00:11:09.616 }, 00:11:09.616 { 00:11:09.616 "params": { 00:11:09.616 "trtype": "pcie", 00:11:09.616 "traddr": "0000:00:11.0", 00:11:09.616 "name": "Nvme1" 00:11:09.616 }, 00:11:09.616 "method": "bdev_nvme_attach_controller" 00:11:09.616 }, 00:11:09.616 { 00:11:09.616 "method": "bdev_wait_for_examine" 00:11:09.616 } 00:11:09.616 ] 00:11:09.616 } 00:11:09.616 ] 00:11:09.616 } 00:11:09.616 [2024-12-09 09:21:47.302793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.874 [2024-12-09 09:21:47.384219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.874 [2024-12-09 09:21:47.457836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.131  [2024-12-09T09:21:48.112Z] Copying: 65/65 [MB] (average 706 MBps) 00:11:10.389 00:11:10.389 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:10.389 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:10.389 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:10.389 09:21:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:10.389 { 00:11:10.389 "subsystems": [ 00:11:10.389 { 00:11:10.389 "subsystem": "bdev", 00:11:10.389 "config": [ 00:11:10.389 { 00:11:10.389 "params": { 00:11:10.389 "trtype": "pcie", 00:11:10.389 "traddr": "0000:00:10.0", 00:11:10.389 "name": "Nvme0" 00:11:10.389 }, 00:11:10.389 "method": "bdev_nvme_attach_controller" 00:11:10.389 }, 00:11:10.389 { 00:11:10.389 "params": { 00:11:10.389 "trtype": "pcie", 00:11:10.389 "traddr": "0000:00:11.0", 00:11:10.389 "name": "Nvme1" 00:11:10.389 }, 00:11:10.389 "method": "bdev_nvme_attach_controller" 00:11:10.389 }, 00:11:10.389 { 00:11:10.389 "method": "bdev_wait_for_examine" 00:11:10.389 } 00:11:10.389 ] 00:11:10.389 } 00:11:10.389 ] 00:11:10.389 } 00:11:10.389 [2024-12-09 09:21:48.006228] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:10.389 [2024-12-09 09:21:48.006305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60913 ] 00:11:10.646 [2024-12-09 09:21:48.155775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.646 [2024-12-09 09:21:48.209589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.646 [2024-12-09 09:21:48.251561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.903  [2024-12-09T09:21:48.626Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:10.903 00:11:10.903 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:10.903 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:10.903 00:11:10.903 real 0m2.904s 00:11:10.903 user 0m2.025s 00:11:10.903 sys 0m0.898s 00:11:10.903 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.903 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:10.903 ************************************ 00:11:10.903 END TEST dd_offset_magic 00:11:10.903 ************************************ 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:11.160 09:21:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:11.160 { 00:11:11.160 "subsystems": [ 00:11:11.160 { 00:11:11.160 "subsystem": "bdev", 00:11:11.160 "config": [ 00:11:11.160 { 00:11:11.160 "params": { 00:11:11.160 "trtype": "pcie", 00:11:11.160 "traddr": "0000:00:10.0", 00:11:11.160 "name": "Nvme0" 00:11:11.160 }, 00:11:11.160 "method": "bdev_nvme_attach_controller" 00:11:11.160 }, 00:11:11.160 { 00:11:11.160 "params": { 00:11:11.160 "trtype": "pcie", 00:11:11.160 "traddr": "0000:00:11.0", 00:11:11.160 "name": "Nvme1" 00:11:11.161 }, 00:11:11.161 "method": "bdev_nvme_attach_controller" 00:11:11.161 }, 00:11:11.161 { 00:11:11.161 "method": "bdev_wait_for_examine" 00:11:11.161 } 00:11:11.161 ] 00:11:11.161 } 00:11:11.161 ] 00:11:11.161 } 00:11:11.161 [2024-12-09 09:21:48.689102] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:11.161 [2024-12-09 09:21:48.689170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60944 ] 00:11:11.161 [2024-12-09 09:21:48.842072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.418 [2024-12-09 09:21:48.895736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.418 [2024-12-09 09:21:48.937334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.418  [2024-12-09T09:21:49.398Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:11:11.675 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:11.675 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:11.675 [2024-12-09 09:21:49.322834] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:11.675 [2024-12-09 09:21:49.322946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60965 ] 00:11:11.675 { 00:11:11.675 "subsystems": [ 00:11:11.675 { 00:11:11.675 "subsystem": "bdev", 00:11:11.675 "config": [ 00:11:11.675 { 00:11:11.675 "params": { 00:11:11.675 "trtype": "pcie", 00:11:11.675 "traddr": "0000:00:10.0", 00:11:11.675 "name": "Nvme0" 00:11:11.675 }, 00:11:11.675 "method": "bdev_nvme_attach_controller" 00:11:11.675 }, 00:11:11.675 { 00:11:11.675 "params": { 00:11:11.675 "trtype": "pcie", 00:11:11.675 "traddr": "0000:00:11.0", 00:11:11.675 "name": "Nvme1" 00:11:11.675 }, 00:11:11.675 "method": "bdev_nvme_attach_controller" 00:11:11.675 }, 00:11:11.675 { 00:11:11.675 "method": "bdev_wait_for_examine" 00:11:11.675 } 00:11:11.675 ] 00:11:11.675 } 00:11:11.675 ] 00:11:11.675 } 00:11:11.933 [2024-12-09 09:21:49.479677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.933 [2024-12-09 09:21:49.532410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.933 [2024-12-09 09:21:49.574250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.191  [2024-12-09T09:21:49.914Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:11:12.191 00:11:12.450 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:12.450 00:11:12.450 real 0m6.628s 00:11:12.450 user 0m4.651s 00:11:12.450 sys 0m2.998s 00:11:12.450 ************************************ 00:11:12.450 END TEST spdk_dd_bdev_to_bdev 00:11:12.450 ************************************ 00:11:12.450 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.450 09:21:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:12.450 09:21:49 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:12.450 09:21:49 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:12.450 09:21:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.450 09:21:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.450 09:21:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:12.450 ************************************ 00:11:12.450 START TEST spdk_dd_uring 00:11:12.450 ************************************ 00:11:12.450 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:12.450 * Looking for test storage... 
00:11:12.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:12.450 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.450 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.450 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.740 --rc genhtml_branch_coverage=1 00:11:12.740 --rc genhtml_function_coverage=1 00:11:12.740 --rc genhtml_legend=1 00:11:12.740 --rc geninfo_all_blocks=1 00:11:12.740 --rc geninfo_unexecuted_blocks=1 00:11:12.740 00:11:12.740 ' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.740 --rc genhtml_branch_coverage=1 00:11:12.740 --rc genhtml_function_coverage=1 00:11:12.740 --rc genhtml_legend=1 00:11:12.740 --rc geninfo_all_blocks=1 00:11:12.740 --rc geninfo_unexecuted_blocks=1 00:11:12.740 00:11:12.740 ' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.740 --rc genhtml_branch_coverage=1 00:11:12.740 --rc genhtml_function_coverage=1 00:11:12.740 --rc genhtml_legend=1 00:11:12.740 --rc geninfo_all_blocks=1 00:11:12.740 --rc geninfo_unexecuted_blocks=1 00:11:12.740 00:11:12.740 ' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.740 --rc genhtml_branch_coverage=1 00:11:12.740 --rc genhtml_function_coverage=1 00:11:12.740 --rc genhtml_legend=1 00:11:12.740 --rc geninfo_all_blocks=1 00:11:12.740 --rc geninfo_unexecuted_blocks=1 00:11:12.740 00:11:12.740 ' 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.740 09:21:50 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:12.741 ************************************ 00:11:12.741 START TEST dd_uring_copy 00:11:12.741 ************************************ 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:12.741 
09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=emjnadmom771a9o344pchzmcrpp23vk0h3fjpbmftsu7rxz7e5opt9zd3xm3h80jp7uhuctqspztewkan6l5bqy3k2t1srm2xkdac4nq97k7z0vb49fwwqztmyuhhbi60fxqcg7amco5qbb4ks3j19vlr0qqejmuyjo99mcpvjp8qvc4qa5rd4zrpqv41b1s0zft13lezl8o3ci5m13gmq5ejq1igc3xse0wpdpqe6p43e7bot82ppgjcvojt16v204n8mf5q8gs5xcd9hcxc2pks0ua29g6i3wsbh98l9ys5hz1lhbywj1p4b4xd8e9b59s3zt2cv69tsc00ih0nx96j0cx3zhyl6bddvz7ku6suhoocbdvgh641nox4dl7j4728ttzwmfqp2693fe0pheeqp3eqmnex0nbyqunp6yj0ae4036yiclx6gkk3xq2tf58r74w6uo0p11waa7uyo35go6jy16bhe2jk1kyvslhsesn628qy4gd7gtusycss8fpegbfp93rbugik05ama07tp8jv8ktaozqcqt0ndxueuhbh542i19dwklrsje2irw7b4j1njoe0ksd17t62kyqz0nfg482vs0ve97zg9wi3ioet1rze0128s5a26mj55ih642oe6gen96emhr49h18ws9clv4slrgqzqwo9xvle57ul2omk7io4tcgd8em3bkchgbzufe6icqsl889lnbxdtfrm97jfti329zrt5p7qvmlgsa466gpe59roc6pcwcx6ims8dqgw7kfwfqy91hbs7phn6vk2m6d5ovwc1zpx729olcnkmfgwvlxfy7anmhtgbratanqgh5fzqiyja8nnwt8owvbhp8xh2lp6k6morwavrgmv7ogw9fpbew4jh0n3xlu7evrxdxw7q5xchciv28s0540uoajdux5rwzbt1qa65c5w13zs3twujju9xljkl7ar4760rsrnys08qbsrwpxa9wsvwhxqnghek654bvaa353u4oulnr6y98f 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
emjnadmom771a9o344pchzmcrpp23vk0h3fjpbmftsu7rxz7e5opt9zd3xm3h80jp7uhuctqspztewkan6l5bqy3k2t1srm2xkdac4nq97k7z0vb49fwwqztmyuhhbi60fxqcg7amco5qbb4ks3j19vlr0qqejmuyjo99mcpvjp8qvc4qa5rd4zrpqv41b1s0zft13lezl8o3ci5m13gmq5ejq1igc3xse0wpdpqe6p43e7bot82ppgjcvojt16v204n8mf5q8gs5xcd9hcxc2pks0ua29g6i3wsbh98l9ys5hz1lhbywj1p4b4xd8e9b59s3zt2cv69tsc00ih0nx96j0cx3zhyl6bddvz7ku6suhoocbdvgh641nox4dl7j4728ttzwmfqp2693fe0pheeqp3eqmnex0nbyqunp6yj0ae4036yiclx6gkk3xq2tf58r74w6uo0p11waa7uyo35go6jy16bhe2jk1kyvslhsesn628qy4gd7gtusycss8fpegbfp93rbugik05ama07tp8jv8ktaozqcqt0ndxueuhbh542i19dwklrsje2irw7b4j1njoe0ksd17t62kyqz0nfg482vs0ve97zg9wi3ioet1rze0128s5a26mj55ih642oe6gen96emhr49h18ws9clv4slrgqzqwo9xvle57ul2omk7io4tcgd8em3bkchgbzufe6icqsl889lnbxdtfrm97jfti329zrt5p7qvmlgsa466gpe59roc6pcwcx6ims8dqgw7kfwfqy91hbs7phn6vk2m6d5ovwc1zpx729olcnkmfgwvlxfy7anmhtgbratanqgh5fzqiyja8nnwt8owvbhp8xh2lp6k6morwavrgmv7ogw9fpbew4jh0n3xlu7evrxdxw7q5xchciv28s0540uoajdux5rwzbt1qa65c5w13zs3twujju9xljkl7ar4760rsrnys08qbsrwpxa9wsvwhxqnghek654bvaa353u4oulnr6y98f 00:11:12.741 09:21:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:12.741 [2024-12-09 09:21:50.322313] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:12.741 [2024-12-09 09:21:50.322393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:11:13.000 [2024-12-09 09:21:50.473220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.000 [2024-12-09 09:21:50.528101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.000 [2024-12-09 09:21:50.569673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.566  [2024-12-09T09:21:51.548Z] Copying: 511/511 [MB] (average 1753 MBps) 00:11:13.825 00:11:13.825 09:21:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:13.825 09:21:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:13.825 09:21:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:13.825 09:21:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:13.825 [2024-12-09 09:21:51.404546] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:13.825 [2024-12-09 09:21:51.404628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61059 ] 00:11:13.825 { 00:11:13.825 "subsystems": [ 00:11:13.825 { 00:11:13.825 "subsystem": "bdev", 00:11:13.825 "config": [ 00:11:13.825 { 00:11:13.825 "params": { 00:11:13.825 "block_size": 512, 00:11:13.825 "num_blocks": 1048576, 00:11:13.825 "name": "malloc0" 00:11:13.825 }, 00:11:13.825 "method": "bdev_malloc_create" 00:11:13.825 }, 00:11:13.825 { 00:11:13.825 "params": { 00:11:13.825 "filename": "/dev/zram1", 00:11:13.825 "name": "uring0" 00:11:13.825 }, 00:11:13.825 "method": "bdev_uring_create" 00:11:13.825 }, 00:11:13.825 { 00:11:13.825 "method": "bdev_wait_for_examine" 00:11:13.825 } 00:11:13.825 ] 00:11:13.825 } 00:11:13.825 ] 00:11:13.825 } 00:11:14.083 [2024-12-09 09:21:51.556727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.083 [2024-12-09 09:21:51.610273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.083 [2024-12-09 09:21:51.652284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:15.457  [2024-12-09T09:21:54.114Z] Copying: 252/512 [MB] (252 MBps) [2024-12-09T09:21:54.114Z] Copying: 493/512 [MB] (240 MBps) [2024-12-09T09:21:54.375Z] Copying: 512/512 [MB] (average 246 MBps) 00:11:16.652 00:11:16.652 09:21:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:16.652 09:21:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:16.652 09:21:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:16.652 09:21:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:16.652 { 00:11:16.652 "subsystems": [ 00:11:16.652 { 00:11:16.652 "subsystem": "bdev", 00:11:16.652 "config": [ 00:11:16.652 { 00:11:16.652 "params": { 00:11:16.652 "block_size": 512, 00:11:16.652 "num_blocks": 1048576, 00:11:16.652 "name": "malloc0" 00:11:16.652 }, 00:11:16.652 "method": "bdev_malloc_create" 00:11:16.652 }, 00:11:16.652 { 00:11:16.652 "params": { 00:11:16.652 "filename": "/dev/zram1", 00:11:16.652 "name": "uring0" 00:11:16.652 }, 00:11:16.652 "method": "bdev_uring_create" 00:11:16.652 }, 00:11:16.652 { 00:11:16.652 "method": "bdev_wait_for_examine" 00:11:16.652 } 00:11:16.652 ] 00:11:16.652 } 00:11:16.652 ] 00:11:16.652 } 00:11:16.652 [2024-12-09 09:21:54.271916] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:16.652 [2024-12-09 09:21:54.272014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ] 00:11:16.911 [2024-12-09 09:21:54.417406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.911 [2024-12-09 09:21:54.476742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.911 [2024-12-09 09:21:54.521181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.286  [2024-12-09T09:21:56.955Z] Copying: 188/512 [MB] (188 MBps) [2024-12-09T09:21:57.889Z] Copying: 368/512 [MB] (180 MBps) [2024-12-09T09:21:57.889Z] Copying: 512/512 [MB] (average 179 MBps) 00:11:20.166 00:11:20.167 09:21:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:20.167 09:21:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ emjnadmom771a9o344pchzmcrpp23vk0h3fjpbmftsu7rxz7e5opt9zd3xm3h80jp7uhuctqspztewkan6l5bqy3k2t1srm2xkdac4nq97k7z0vb49fwwqztmyuhhbi60fxqcg7amco5qbb4ks3j19vlr0qqejmuyjo99mcpvjp8qvc4qa5rd4zrpqv41b1s0zft13lezl8o3ci5m13gmq5ejq1igc3xse0wpdpqe6p43e7bot82ppgjcvojt16v204n8mf5q8gs5xcd9hcxc2pks0ua29g6i3wsbh98l9ys5hz1lhbywj1p4b4xd8e9b59s3zt2cv69tsc00ih0nx96j0cx3zhyl6bddvz7ku6suhoocbdvgh641nox4dl7j4728ttzwmfqp2693fe0pheeqp3eqmnex0nbyqunp6yj0ae4036yiclx6gkk3xq2tf58r74w6uo0p11waa7uyo35go6jy16bhe2jk1kyvslhsesn628qy4gd7gtusycss8fpegbfp93rbugik05ama07tp8jv8ktaozqcqt0ndxueuhbh542i19dwklrsje2irw7b4j1njoe0ksd17t62kyqz0nfg482vs0ve97zg9wi3ioet1rze0128s5a26mj55ih642oe6gen96emhr49h18ws9clv4slrgqzqwo9xvle57ul2omk7io4tcgd8em3bkchgbzufe6icqsl889lnbxdtfrm97jfti329zrt5p7qvmlgsa466gpe59roc6pcwcx6ims8dqgw7kfwfqy91hbs7phn6vk2m6d5ovwc1zpx729olcnkmfgwvlxfy7anmhtgbratanqgh5fzqiyja8nnwt8owvbhp8xh2lp6k6morwavrgmv7ogw9fpbew4jh0n3xlu7evrxdxw7q5xchciv28s0540uoajdux5rwzbt1qa65c5w13zs3twujju9xljkl7ar4760rsrnys08qbsrwpxa9wsvwhxqnghek654bvaa353u4oulnr6y98f == 
\e\m\j\n\a\d\m\o\m\7\7\1\a\9\o\3\4\4\p\c\h\z\m\c\r\p\p\2\3\v\k\0\h\3\f\j\p\b\m\f\t\s\u\7\r\x\z\7\e\5\o\p\t\9\z\d\3\x\m\3\h\8\0\j\p\7\u\h\u\c\t\q\s\p\z\t\e\w\k\a\n\6\l\5\b\q\y\3\k\2\t\1\s\r\m\2\x\k\d\a\c\4\n\q\9\7\k\7\z\0\v\b\4\9\f\w\w\q\z\t\m\y\u\h\h\b\i\6\0\f\x\q\c\g\7\a\m\c\o\5\q\b\b\4\k\s\3\j\1\9\v\l\r\0\q\q\e\j\m\u\y\j\o\9\9\m\c\p\v\j\p\8\q\v\c\4\q\a\5\r\d\4\z\r\p\q\v\4\1\b\1\s\0\z\f\t\1\3\l\e\z\l\8\o\3\c\i\5\m\1\3\g\m\q\5\e\j\q\1\i\g\c\3\x\s\e\0\w\p\d\p\q\e\6\p\4\3\e\7\b\o\t\8\2\p\p\g\j\c\v\o\j\t\1\6\v\2\0\4\n\8\m\f\5\q\8\g\s\5\x\c\d\9\h\c\x\c\2\p\k\s\0\u\a\2\9\g\6\i\3\w\s\b\h\9\8\l\9\y\s\5\h\z\1\l\h\b\y\w\j\1\p\4\b\4\x\d\8\e\9\b\5\9\s\3\z\t\2\c\v\6\9\t\s\c\0\0\i\h\0\n\x\9\6\j\0\c\x\3\z\h\y\l\6\b\d\d\v\z\7\k\u\6\s\u\h\o\o\c\b\d\v\g\h\6\4\1\n\o\x\4\d\l\7\j\4\7\2\8\t\t\z\w\m\f\q\p\2\6\9\3\f\e\0\p\h\e\e\q\p\3\e\q\m\n\e\x\0\n\b\y\q\u\n\p\6\y\j\0\a\e\4\0\3\6\y\i\c\l\x\6\g\k\k\3\x\q\2\t\f\5\8\r\7\4\w\6\u\o\0\p\1\1\w\a\a\7\u\y\o\3\5\g\o\6\j\y\1\6\b\h\e\2\j\k\1\k\y\v\s\l\h\s\e\s\n\6\2\8\q\y\4\g\d\7\g\t\u\s\y\c\s\s\8\f\p\e\g\b\f\p\9\3\r\b\u\g\i\k\0\5\a\m\a\0\7\t\p\8\j\v\8\k\t\a\o\z\q\c\q\t\0\n\d\x\u\e\u\h\b\h\5\4\2\i\1\9\d\w\k\l\r\s\j\e\2\i\r\w\7\b\4\j\1\n\j\o\e\0\k\s\d\1\7\t\6\2\k\y\q\z\0\n\f\g\4\8\2\v\s\0\v\e\9\7\z\g\9\w\i\3\i\o\e\t\1\r\z\e\0\1\2\8\s\5\a\2\6\m\j\5\5\i\h\6\4\2\o\e\6\g\e\n\9\6\e\m\h\r\4\9\h\1\8\w\s\9\c\l\v\4\s\l\r\g\q\z\q\w\o\9\x\v\l\e\5\7\u\l\2\o\m\k\7\i\o\4\t\c\g\d\8\e\m\3\b\k\c\h\g\b\z\u\f\e\6\i\c\q\s\l\8\8\9\l\n\b\x\d\t\f\r\m\9\7\j\f\t\i\3\2\9\z\r\t\5\p\7\q\v\m\l\g\s\a\4\6\6\g\p\e\5\9\r\o\c\6\p\c\w\c\x\6\i\m\s\8\d\q\g\w\7\k\f\w\f\q\y\9\1\h\b\s\7\p\h\n\6\v\k\2\m\6\d\5\o\v\w\c\1\z\p\x\7\2\9\o\l\c\n\k\m\f\g\w\v\l\x\f\y\7\a\n\m\h\t\g\b\r\a\t\a\n\q\g\h\5\f\z\q\i\y\j\a\8\n\n\w\t\8\o\w\v\b\h\p\8\x\h\2\l\p\6\k\6\m\o\r\w\a\v\r\g\m\v\7\o\g\w\9\f\p\b\e\w\4\j\h\0\n\3\x\l\u\7\e\v\r\x\d\x\w\7\q\5\x\c\h\c\i\v\2\8\s\0\5\4\0\u\o\a\j\d\u\x\5\r\w\z\b\t\1\q\a\6\5\c\5\w\1\3\z\s\3\t\w\u\j\j\u\9\x\l\j\k\l\7\a\r\4\7\6\0\r\s\r\n\y\s\0\8\q\b\s\r\w\p\x\a\9\w\s\v\w\h\x\q\n\g\h\e\k\6\5\4\b\v\a\a\3\5\3\u\4\o\u\l\n\r\6\y\9\8\f ]] 00:11:20.167 09:21:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:20.167 09:21:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ emjnadmom771a9o344pchzmcrpp23vk0h3fjpbmftsu7rxz7e5opt9zd3xm3h80jp7uhuctqspztewkan6l5bqy3k2t1srm2xkdac4nq97k7z0vb49fwwqztmyuhhbi60fxqcg7amco5qbb4ks3j19vlr0qqejmuyjo99mcpvjp8qvc4qa5rd4zrpqv41b1s0zft13lezl8o3ci5m13gmq5ejq1igc3xse0wpdpqe6p43e7bot82ppgjcvojt16v204n8mf5q8gs5xcd9hcxc2pks0ua29g6i3wsbh98l9ys5hz1lhbywj1p4b4xd8e9b59s3zt2cv69tsc00ih0nx96j0cx3zhyl6bddvz7ku6suhoocbdvgh641nox4dl7j4728ttzwmfqp2693fe0pheeqp3eqmnex0nbyqunp6yj0ae4036yiclx6gkk3xq2tf58r74w6uo0p11waa7uyo35go6jy16bhe2jk1kyvslhsesn628qy4gd7gtusycss8fpegbfp93rbugik05ama07tp8jv8ktaozqcqt0ndxueuhbh542i19dwklrsje2irw7b4j1njoe0ksd17t62kyqz0nfg482vs0ve97zg9wi3ioet1rze0128s5a26mj55ih642oe6gen96emhr49h18ws9clv4slrgqzqwo9xvle57ul2omk7io4tcgd8em3bkchgbzufe6icqsl889lnbxdtfrm97jfti329zrt5p7qvmlgsa466gpe59roc6pcwcx6ims8dqgw7kfwfqy91hbs7phn6vk2m6d5ovwc1zpx729olcnkmfgwvlxfy7anmhtgbratanqgh5fzqiyja8nnwt8owvbhp8xh2lp6k6morwavrgmv7ogw9fpbew4jh0n3xlu7evrxdxw7q5xchciv28s0540uoajdux5rwzbt1qa65c5w13zs3twujju9xljkl7ar4760rsrnys08qbsrwpxa9wsvwhxqnghek654bvaa353u4oulnr6y98f == 
\e\m\j\n\a\d\m\o\m\7\7\1\a\9\o\3\4\4\p\c\h\z\m\c\r\p\p\2\3\v\k\0\h\3\f\j\p\b\m\f\t\s\u\7\r\x\z\7\e\5\o\p\t\9\z\d\3\x\m\3\h\8\0\j\p\7\u\h\u\c\t\q\s\p\z\t\e\w\k\a\n\6\l\5\b\q\y\3\k\2\t\1\s\r\m\2\x\k\d\a\c\4\n\q\9\7\k\7\z\0\v\b\4\9\f\w\w\q\z\t\m\y\u\h\h\b\i\6\0\f\x\q\c\g\7\a\m\c\o\5\q\b\b\4\k\s\3\j\1\9\v\l\r\0\q\q\e\j\m\u\y\j\o\9\9\m\c\p\v\j\p\8\q\v\c\4\q\a\5\r\d\4\z\r\p\q\v\4\1\b\1\s\0\z\f\t\1\3\l\e\z\l\8\o\3\c\i\5\m\1\3\g\m\q\5\e\j\q\1\i\g\c\3\x\s\e\0\w\p\d\p\q\e\6\p\4\3\e\7\b\o\t\8\2\p\p\g\j\c\v\o\j\t\1\6\v\2\0\4\n\8\m\f\5\q\8\g\s\5\x\c\d\9\h\c\x\c\2\p\k\s\0\u\a\2\9\g\6\i\3\w\s\b\h\9\8\l\9\y\s\5\h\z\1\l\h\b\y\w\j\1\p\4\b\4\x\d\8\e\9\b\5\9\s\3\z\t\2\c\v\6\9\t\s\c\0\0\i\h\0\n\x\9\6\j\0\c\x\3\z\h\y\l\6\b\d\d\v\z\7\k\u\6\s\u\h\o\o\c\b\d\v\g\h\6\4\1\n\o\x\4\d\l\7\j\4\7\2\8\t\t\z\w\m\f\q\p\2\6\9\3\f\e\0\p\h\e\e\q\p\3\e\q\m\n\e\x\0\n\b\y\q\u\n\p\6\y\j\0\a\e\4\0\3\6\y\i\c\l\x\6\g\k\k\3\x\q\2\t\f\5\8\r\7\4\w\6\u\o\0\p\1\1\w\a\a\7\u\y\o\3\5\g\o\6\j\y\1\6\b\h\e\2\j\k\1\k\y\v\s\l\h\s\e\s\n\6\2\8\q\y\4\g\d\7\g\t\u\s\y\c\s\s\8\f\p\e\g\b\f\p\9\3\r\b\u\g\i\k\0\5\a\m\a\0\7\t\p\8\j\v\8\k\t\a\o\z\q\c\q\t\0\n\d\x\u\e\u\h\b\h\5\4\2\i\1\9\d\w\k\l\r\s\j\e\2\i\r\w\7\b\4\j\1\n\j\o\e\0\k\s\d\1\7\t\6\2\k\y\q\z\0\n\f\g\4\8\2\v\s\0\v\e\9\7\z\g\9\w\i\3\i\o\e\t\1\r\z\e\0\1\2\8\s\5\a\2\6\m\j\5\5\i\h\6\4\2\o\e\6\g\e\n\9\6\e\m\h\r\4\9\h\1\8\w\s\9\c\l\v\4\s\l\r\g\q\z\q\w\o\9\x\v\l\e\5\7\u\l\2\o\m\k\7\i\o\4\t\c\g\d\8\e\m\3\b\k\c\h\g\b\z\u\f\e\6\i\c\q\s\l\8\8\9\l\n\b\x\d\t\f\r\m\9\7\j\f\t\i\3\2\9\z\r\t\5\p\7\q\v\m\l\g\s\a\4\6\6\g\p\e\5\9\r\o\c\6\p\c\w\c\x\6\i\m\s\8\d\q\g\w\7\k\f\w\f\q\y\9\1\h\b\s\7\p\h\n\6\v\k\2\m\6\d\5\o\v\w\c\1\z\p\x\7\2\9\o\l\c\n\k\m\f\g\w\v\l\x\f\y\7\a\n\m\h\t\g\b\r\a\t\a\n\q\g\h\5\f\z\q\i\y\j\a\8\n\n\w\t\8\o\w\v\b\h\p\8\x\h\2\l\p\6\k\6\m\o\r\w\a\v\r\g\m\v\7\o\g\w\9\f\p\b\e\w\4\j\h\0\n\3\x\l\u\7\e\v\r\x\d\x\w\7\q\5\x\c\h\c\i\v\2\8\s\0\5\4\0\u\o\a\j\d\u\x\5\r\w\z\b\t\1\q\a\6\5\c\5\w\1\3\z\s\3\t\w\u\j\j\u\9\x\l\j\k\l\7\a\r\4\7\6\0\r\s\r\n\y\s\0\8\q\b\s\r\w\p\x\a\9\w\s\v\w\h\x\q\n\g\h\e\k\6\5\4\b\v\a\a\3\5\3\u\4\o\u\l\n\r\6\y\9\8\f ]] 00:11:20.167 09:21:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:20.730 09:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:20.730 09:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:20.730 09:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:20.730 09:21:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:20.730 [2024-12-09 09:21:58.366846] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:20.730 [2024-12-09 09:21:58.366951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61172 ] 00:11:20.730 { 00:11:20.730 "subsystems": [ 00:11:20.730 { 00:11:20.730 "subsystem": "bdev", 00:11:20.730 "config": [ 00:11:20.730 { 00:11:20.730 "params": { 00:11:20.730 "block_size": 512, 00:11:20.730 "num_blocks": 1048576, 00:11:20.730 "name": "malloc0" 00:11:20.730 }, 00:11:20.730 "method": "bdev_malloc_create" 00:11:20.730 }, 00:11:20.730 { 00:11:20.730 "params": { 00:11:20.730 "filename": "/dev/zram1", 00:11:20.730 "name": "uring0" 00:11:20.730 }, 00:11:20.730 "method": "bdev_uring_create" 00:11:20.730 }, 00:11:20.730 { 00:11:20.730 "method": "bdev_wait_for_examine" 00:11:20.730 } 00:11:20.730 ] 00:11:20.730 } 00:11:20.730 ] 00:11:20.730 } 00:11:20.987 [2024-12-09 09:21:58.507862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.987 [2024-12-09 09:21:58.571836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.987 [2024-12-09 09:21:58.618750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.358  [2024-12-09T09:22:01.011Z] Copying: 163/512 [MB] (163 MBps) [2024-12-09T09:22:01.948Z] Copying: 323/512 [MB] (160 MBps) [2024-12-09T09:22:01.948Z] Copying: 506/512 [MB] (182 MBps) [2024-12-09T09:22:02.207Z] Copying: 512/512 [MB] (average 169 MBps) 00:11:24.484 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:24.484 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:24.484 [2024-12-09 09:22:02.193675] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:24.484 [2024-12-09 09:22:02.193752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61228 ] 00:11:24.484 { 00:11:24.484 "subsystems": [ 00:11:24.484 { 00:11:24.484 "subsystem": "bdev", 00:11:24.484 "config": [ 00:11:24.484 { 00:11:24.484 "params": { 00:11:24.484 "block_size": 512, 00:11:24.484 "num_blocks": 1048576, 00:11:24.484 "name": "malloc0" 00:11:24.484 }, 00:11:24.484 "method": "bdev_malloc_create" 00:11:24.484 }, 00:11:24.484 { 00:11:24.484 "params": { 00:11:24.484 "filename": "/dev/zram1", 00:11:24.484 "name": "uring0" 00:11:24.484 }, 00:11:24.484 "method": "bdev_uring_create" 00:11:24.484 }, 00:11:24.484 { 00:11:24.484 "params": { 00:11:24.484 "name": "uring0" 00:11:24.484 }, 00:11:24.484 "method": "bdev_uring_delete" 00:11:24.484 }, 00:11:24.484 { 00:11:24.484 "method": "bdev_wait_for_examine" 00:11:24.484 } 00:11:24.484 ] 00:11:24.484 } 00:11:24.484 ] 00:11:24.484 } 00:11:24.741 [2024-12-09 09:22:02.342970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.741 [2024-12-09 09:22:02.395984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.741 [2024-12-09 09:22:02.438314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.000  [2024-12-09T09:22:02.982Z] Copying: 0/0 [B] (average 0 Bps) 00:11:25.259 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.259 09:22:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:25.259 09:22:02 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:25.517 [2024-12-09 09:22:02.993012] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:25.517 [2024-12-09 09:22:02.993097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:11:25.517 { 00:11:25.517 "subsystems": [ 00:11:25.517 { 00:11:25.517 "subsystem": "bdev", 00:11:25.517 "config": [ 00:11:25.517 { 00:11:25.517 "params": { 00:11:25.517 "block_size": 512, 00:11:25.517 "num_blocks": 1048576, 00:11:25.517 "name": "malloc0" 00:11:25.517 }, 00:11:25.517 "method": "bdev_malloc_create" 00:11:25.517 }, 00:11:25.517 { 00:11:25.517 "params": { 00:11:25.517 "filename": "/dev/zram1", 00:11:25.517 "name": "uring0" 00:11:25.517 }, 00:11:25.517 "method": "bdev_uring_create" 00:11:25.517 }, 00:11:25.517 { 00:11:25.517 "params": { 00:11:25.517 "name": "uring0" 00:11:25.517 }, 00:11:25.517 "method": "bdev_uring_delete" 00:11:25.517 }, 00:11:25.517 { 00:11:25.517 "method": "bdev_wait_for_examine" 00:11:25.517 } 00:11:25.517 ] 00:11:25.517 } 00:11:25.517 ] 00:11:25.517 } 00:11:25.517 [2024-12-09 09:22:03.145243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.517 [2024-12-09 09:22:03.201617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.775 [2024-12-09 09:22:03.251301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.775 [2024-12-09 09:22:03.441151] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:25.775 [2024-12-09 09:22:03.441207] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:25.775 [2024-12-09 09:22:03.441216] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:11:25.775 [2024-12-09 09:22:03.441226] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:26.034 [2024-12-09 09:22:03.715933] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:26.292 00:11:26.292 real 0m13.750s 00:11:26.292 user 0m9.152s 00:11:26.292 sys 0m11.795s 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.292 ************************************ 00:11:26.292 END TEST dd_uring_copy 00:11:26.292 09:22:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:26.292 ************************************ 00:11:26.552 00:11:26.552 real 0m14.016s 00:11:26.552 user 0m9.285s 00:11:26.552 sys 0m11.943s 00:11:26.552 09:22:04 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.552 09:22:04 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:26.552 ************************************ 00:11:26.552 END TEST spdk_dd_uring 00:11:26.552 ************************************ 00:11:26.552 09:22:04 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:26.552 09:22:04 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.552 09:22:04 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.552 09:22:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:26.552 ************************************ 00:11:26.552 START TEST spdk_dd_sparse 00:11:26.552 ************************************ 00:11:26.552 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:26.552 * Looking for test storage... 00:11:26.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:26.552 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.552 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.552 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.811 --rc genhtml_branch_coverage=1 00:11:26.811 --rc genhtml_function_coverage=1 00:11:26.811 --rc genhtml_legend=1 00:11:26.811 --rc geninfo_all_blocks=1 00:11:26.811 --rc geninfo_unexecuted_blocks=1 00:11:26.811 00:11:26.811 ' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.811 --rc genhtml_branch_coverage=1 00:11:26.811 --rc genhtml_function_coverage=1 00:11:26.811 --rc genhtml_legend=1 00:11:26.811 --rc geninfo_all_blocks=1 00:11:26.811 --rc geninfo_unexecuted_blocks=1 00:11:26.811 00:11:26.811 ' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.811 --rc genhtml_branch_coverage=1 00:11:26.811 --rc genhtml_function_coverage=1 00:11:26.811 --rc genhtml_legend=1 00:11:26.811 --rc geninfo_all_blocks=1 00:11:26.811 --rc geninfo_unexecuted_blocks=1 00:11:26.811 00:11:26.811 ' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.811 --rc genhtml_branch_coverage=1 00:11:26.811 --rc genhtml_function_coverage=1 00:11:26.811 --rc genhtml_legend=1 00:11:26.811 --rc geninfo_all_blocks=1 00:11:26.811 --rc geninfo_unexecuted_blocks=1 00:11:26.811 00:11:26.811 ' 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.811 09:22:04 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:26.811 1+0 records in 00:11:26.811 1+0 records out 00:11:26.811 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00948466 s, 442 MB/s 00:11:26.811 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:26.811 1+0 records in 00:11:26.811 1+0 records out 00:11:26.812 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00998478 s, 420 MB/s 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:26.812 1+0 records in 00:11:26.812 1+0 records out 00:11:26.812 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0114693 s, 366 MB/s 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:26.812 ************************************ 00:11:26.812 START TEST dd_sparse_file_to_file 00:11:26.812 ************************************ 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:26.812 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:26.812 [2024-12-09 09:22:04.454356] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
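The prepare step traced above builds the sparse source that every dd_sparse_* subtest reuses: a 100 MiB backing file for the AIO bdev plus a source file with three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB, which spdk_dd then copies with --sparse so the holes are preserved. A minimal standalone sketch of the same sequence, assuming GNU coreutils and the spdk_dd binary path shown in this log; bdev.json stands in for the JSON config the test feeds in on /dev/fd/62 (dumped a few entries below):

# sketch only: reproduce the prepare step and the sparse file-to-file copy by hand
truncate --size=104857600 dd_sparse_aio_disk             # 100 MiB backing file for the bdev_aio_create config
dd if=/dev/zero of=file_zero1 bs=4M count=1              # data extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4       # data extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8       # data extent at 32 MiB, file length becomes 36 MiB
# --sparse tells spdk_dd to skip holes in the input instead of writing them out as zeroes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json bdev.json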
00:11:26.812 [2024-12-09 09:22:04.454475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61352 ] 00:11:26.812 { 00:11:26.812 "subsystems": [ 00:11:26.812 { 00:11:26.812 "subsystem": "bdev", 00:11:26.812 "config": [ 00:11:26.812 { 00:11:26.812 "params": { 00:11:26.812 "block_size": 4096, 00:11:26.812 "filename": "dd_sparse_aio_disk", 00:11:26.812 "name": "dd_aio" 00:11:26.812 }, 00:11:26.812 "method": "bdev_aio_create" 00:11:26.812 }, 00:11:26.812 { 00:11:26.812 "params": { 00:11:26.812 "lvs_name": "dd_lvstore", 00:11:26.812 "bdev_name": "dd_aio" 00:11:26.812 }, 00:11:26.812 "method": "bdev_lvol_create_lvstore" 00:11:26.812 }, 00:11:26.812 { 00:11:26.812 "method": "bdev_wait_for_examine" 00:11:26.812 } 00:11:26.812 ] 00:11:26.812 } 00:11:26.812 ] 00:11:26.812 } 00:11:27.070 [2024-12-09 09:22:04.606356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.070 [2024-12-09 09:22:04.659301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.070 [2024-12-09 09:22:04.704646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.329  [2024-12-09T09:22:05.052Z] Copying: 12/36 [MB] (average 705 MBps) 00:11:27.329 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:27.329 09:22:04 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:27.329 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:11:27.329 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:27.329 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:27.329 00:11:27.329 real 0m0.614s 00:11:27.329 user 0m0.381s 00:11:27.329 sys 0m0.325s 00:11:27.329 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.329 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:27.329 ************************************ 00:11:27.329 END TEST dd_sparse_file_to_file 00:11:27.329 ************************************ 00:11:27.587 09:22:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:27.588 ************************************ 00:11:27.588 START TEST dd_sparse_file_to_bdev 
00:11:27.588 ************************************ 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:27.588 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:27.588 [2024-12-09 09:22:05.142503] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:27.588 [2024-12-09 09:22:05.142629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61396 ] 00:11:27.588 { 00:11:27.588 "subsystems": [ 00:11:27.588 { 00:11:27.588 "subsystem": "bdev", 00:11:27.588 "config": [ 00:11:27.588 { 00:11:27.588 "params": { 00:11:27.588 "block_size": 4096, 00:11:27.588 "filename": "dd_sparse_aio_disk", 00:11:27.588 "name": "dd_aio" 00:11:27.588 }, 00:11:27.588 "method": "bdev_aio_create" 00:11:27.588 }, 00:11:27.588 { 00:11:27.588 "params": { 00:11:27.588 "lvs_name": "dd_lvstore", 00:11:27.588 "lvol_name": "dd_lvol", 00:11:27.588 "size_in_mib": 36, 00:11:27.588 "thin_provision": true 00:11:27.588 }, 00:11:27.588 "method": "bdev_lvol_create" 00:11:27.588 }, 00:11:27.588 { 00:11:27.588 "method": "bdev_wait_for_examine" 00:11:27.588 } 00:11:27.588 ] 00:11:27.588 } 00:11:27.588 ] 00:11:27.588 } 00:11:27.588 [2024-12-09 09:22:05.302300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.846 [2024-12-09 09:22:05.354834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.846 [2024-12-09 09:22:05.398421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.846  [2024-12-09T09:22:05.828Z] Copying: 12/36 [MB] (average 444 MBps) 00:11:28.105 00:11:28.105 00:11:28.105 real 0m0.588s 00:11:28.105 user 0m0.379s 00:11:28.105 sys 0m0.305s 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 ************************************ 00:11:28.105 END TEST dd_sparse_file_to_bdev 00:11:28.105 ************************************ 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 ************************************ 00:11:28.105 START TEST dd_sparse_bdev_to_file 00:11:28.105 ************************************ 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:28.105 09:22:05 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:28.105 { 00:11:28.105 "subsystems": [ 00:11:28.105 { 00:11:28.105 "subsystem": "bdev", 00:11:28.105 "config": [ 00:11:28.105 { 00:11:28.105 "params": { 00:11:28.105 "block_size": 4096, 00:11:28.105 "filename": "dd_sparse_aio_disk", 00:11:28.105 "name": "dd_aio" 00:11:28.105 }, 00:11:28.105 "method": "bdev_aio_create" 00:11:28.105 }, 00:11:28.105 { 00:11:28.105 "method": "bdev_wait_for_examine" 00:11:28.105 } 00:11:28.105 ] 00:11:28.105 } 00:11:28.105 ] 00:11:28.105 } 00:11:28.105 [2024-12-09 09:22:05.806042] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
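Each sparse subtest validates its copy with stat: %s must match between source and destination (same apparent length) and %b must match as well (same count of allocated 512-byte blocks). That is how the file-to-file run above ends up comparing 37748736 == 37748736 and 24576 == 24576, i.e. 36 MiB of apparent size backed by only 12 MiB of real data, and the bdev-to-file run below repeats the same check against file_zero3. A sketch of that check with illustrative variable names:

# sketch of the sparseness check used after each copy (file names as in the file-to-file case)
size_src=$(stat --printf=%s file_zero1)      # apparent size in bytes
size_dst=$(stat --printf=%s file_zero2)
blocks_src=$(stat --printf=%b file_zero1)    # allocated 512-byte blocks
blocks_dst=$(stat --printf=%b file_zero2)
[[ $size_src -eq $size_dst && $blocks_src -eq $blocks_dst ]] \
  && echo "holes preserved: ${size_dst} bytes apparent, $(( blocks_dst * 512 )) bytes allocated"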
00:11:28.105 [2024-12-09 09:22:05.806121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61427 ] 00:11:28.364 [2024-12-09 09:22:05.956403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.364 [2024-12-09 09:22:06.008580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.364 [2024-12-09 09:22:06.051812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.621  [2024-12-09T09:22:06.344Z] Copying: 12/36 [MB] (average 750 MBps) 00:11:28.621 00:11:28.621 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:28.621 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:28.621 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:28.621 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:28.621 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:28.621 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:28.880 00:11:28.880 real 0m0.616s 00:11:28.880 user 0m0.383s 00:11:28.880 sys 0m0.334s 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:28.880 ************************************ 00:11:28.880 END TEST dd_sparse_bdev_to_file 00:11:28.880 ************************************ 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:28.880 00:11:28.880 real 0m2.360s 00:11:28.880 user 0m1.366s 00:11:28.880 sys 0m1.284s 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.880 09:22:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:28.880 ************************************ 00:11:28.880 END TEST spdk_dd_sparse 00:11:28.880 ************************************ 00:11:28.880 09:22:06 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:28.880 09:22:06 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.880 09:22:06 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.880 09:22:06 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.880 ************************************ 00:11:28.880 START TEST spdk_dd_negative 00:11:28.880 ************************************ 00:11:28.880 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:29.140 * Looking for test storage... 00:11:29.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.140 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.140 --rc genhtml_branch_coverage=1 00:11:29.140 --rc genhtml_function_coverage=1 00:11:29.141 --rc genhtml_legend=1 00:11:29.141 --rc geninfo_all_blocks=1 00:11:29.141 --rc geninfo_unexecuted_blocks=1 00:11:29.141 00:11:29.141 ' 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.141 --rc genhtml_branch_coverage=1 00:11:29.141 --rc genhtml_function_coverage=1 00:11:29.141 --rc genhtml_legend=1 00:11:29.141 --rc geninfo_all_blocks=1 00:11:29.141 --rc geninfo_unexecuted_blocks=1 00:11:29.141 00:11:29.141 ' 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.141 --rc genhtml_branch_coverage=1 00:11:29.141 --rc genhtml_function_coverage=1 00:11:29.141 --rc genhtml_legend=1 00:11:29.141 --rc geninfo_all_blocks=1 00:11:29.141 --rc geninfo_unexecuted_blocks=1 00:11:29.141 00:11:29.141 ' 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.141 --rc genhtml_branch_coverage=1 00:11:29.141 --rc genhtml_function_coverage=1 00:11:29.141 --rc genhtml_legend=1 00:11:29.141 --rc geninfo_all_blocks=1 00:11:29.141 --rc geninfo_unexecuted_blocks=1 00:11:29.141 00:11:29.141 ' 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.141 ************************************ 00:11:29.141 START TEST 
dd_invalid_arguments 00:11:29.141 ************************************ 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.141 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:29.141 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:29.141 00:11:29.141 CPU options: 00:11:29.141 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:29.141 (like [0,1,10]) 00:11:29.141 --lcores lcore to CPU mapping list. The list is in the format: 00:11:29.141 [<,lcores[@CPUs]>...] 00:11:29.141 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:29.141 Within the group, '-' is used for range separator, 00:11:29.141 ',' is used for single number separator. 00:11:29.141 '( )' can be omitted for single element group, 00:11:29.141 '@' can be omitted if cpus and lcores have the same value 00:11:29.141 --disable-cpumask-locks Disable CPU core lock files. 00:11:29.141 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:29.141 pollers in the app support interrupt mode) 00:11:29.141 -p, --main-core main (primary) core for DPDK 00:11:29.141 00:11:29.141 Configuration options: 00:11:29.141 -c, --config, --json JSON config file 00:11:29.141 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:29.141 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:29.141 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:29.141 --rpcs-allowed comma-separated list of permitted RPCS 00:11:29.141 --json-ignore-init-errors don't exit on invalid config entry 00:11:29.141 00:11:29.141 Memory options: 00:11:29.141 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:29.142 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:29.142 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:29.142 -R, --huge-unlink unlink huge files after initialization 00:11:29.142 -n, --mem-channels number of memory channels used for DPDK 00:11:29.142 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:29.142 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:29.142 --no-huge run without using hugepages 00:11:29.142 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:29.142 -i, --shm-id shared memory ID (optional) 00:11:29.142 -g, --single-file-segments force creating just one hugetlbfs file 00:11:29.142 00:11:29.142 PCI options: 00:11:29.142 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:29.142 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:29.142 -u, --no-pci disable PCI access 00:11:29.142 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:29.142 00:11:29.142 Log options: 00:11:29.142 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:29.142 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:29.142 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:29.142 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:29.142 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:11:29.142 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:11:29.142 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:11:29.142 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:11:29.142 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:11:29.142 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:11:29.142 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:29.142 --silence-noticelog disable notice level logging to stderr 00:11:29.142 00:11:29.142 Trace options: 00:11:29.142 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:29.142 setting 0 to disable trace (default 32768) 00:11:29.142 Tracepoints vary in size and can use more than one trace entry. 00:11:29.142 -e, --tpoint-group [:] 00:11:29.142 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:29.142 [2024-12-09 09:22:06.821543] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:11:29.142 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:11:29.142 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:11:29.142 bdev_raid, scheduler, all). 00:11:29.142 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:29.142 a tracepoint group. First tpoint inside a group can be enabled by 00:11:29.142 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:29.142 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:29.142 in /include/spdk_internal/trace_defs.h 00:11:29.142 00:11:29.142 Other options: 00:11:29.142 -h, --help show this usage 00:11:29.142 -v, --version print SPDK version 00:11:29.142 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:29.142 --env-context Opaque context for use of the env implementation 00:11:29.142 00:11:29.142 Application specific: 00:11:29.142 [--------- DD Options ---------] 00:11:29.142 --if Input file. Must specify either --if or --ib. 00:11:29.142 --ib Input bdev. Must specifier either --if or --ib 00:11:29.142 --of Output file. Must specify either --of or --ob. 00:11:29.142 --ob Output bdev. Must specify either --of or --ob. 00:11:29.142 --iflag Input file flags. 00:11:29.142 --oflag Output file flags. 00:11:29.142 --bs I/O unit size (default: 4096) 00:11:29.142 --qd Queue depth (default: 2) 00:11:29.142 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:29.142 --skip Skip this many I/O units at start of input. (default: 0) 00:11:29.142 --seek Skip this many I/O units at start of output. (default: 0) 00:11:29.142 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:29.142 --sparse Enable hole skipping in input target 00:11:29.142 Available iflag and oflag values: 00:11:29.142 append - append mode 00:11:29.142 direct - use direct I/O for data 00:11:29.142 directory - fail unless a directory 00:11:29.142 dsync - use synchronized I/O for data 00:11:29.142 noatime - do not update access time 00:11:29.142 noctty - do not assign controlling terminal from file 00:11:29.142 nofollow - do not follow symlinks 00:11:29.142 nonblock - use non-blocking I/O 00:11:29.142 sync - use synchronized I/O for data and metadata 00:11:29.142 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:11:29.142 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.142 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.142 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.142 00:11:29.142 real 0m0.074s 00:11:29.142 user 0m0.038s 00:11:29.142 sys 0m0.035s 00:11:29.142 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.142 09:22:06 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:29.142 ************************************ 00:11:29.142 END TEST dd_invalid_arguments 00:11:29.142 ************************************ 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.400 ************************************ 00:11:29.400 START TEST dd_double_input 00:11:29.400 ************************************ 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:29.400 [2024-12-09 09:22:06.965669] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
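The spdk_dd_negative subtests all share this shape: feed spdk_dd a forbidden option combination through the NOT helper and require both a non-zero exit status and the matching *ERROR* line, as with --if combined with --ib just above. A rough standalone equivalent of the dd_double_input case, with NOT approximated by a plain conditional; err.log is an illustrative name and the empty --ib=/--ob= values mirror the trace above:

# sketch only, not the harness code
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= >err.log 2>&1; then
    echo "unexpected success: --if and --ib must be mutually exclusive" >&2
    exit 1
fi
grep -q 'either --if or --ib, but not both' err.log   # error text quoted from the log above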
00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.400 00:11:29.400 real 0m0.072s 00:11:29.400 user 0m0.038s 00:11:29.400 sys 0m0.033s 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.400 09:22:06 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:29.400 ************************************ 00:11:29.400 END TEST dd_double_input 00:11:29.400 ************************************ 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.400 ************************************ 00:11:29.400 START TEST dd_double_output 00:11:29.400 ************************************ 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:11:29.400 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.401 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:29.401 [2024-12-09 09:22:07.108835] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.658 00:11:29.658 real 0m0.076s 00:11:29.658 user 0m0.041s 00:11:29.658 sys 0m0.034s 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:29.658 ************************************ 00:11:29.658 END TEST dd_double_output 00:11:29.658 ************************************ 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.658 ************************************ 00:11:29.658 START TEST dd_no_input 00:11:29.658 ************************************ 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.658 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:29.659 [2024-12-09 09:22:07.244171] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.659 00:11:29.659 real 0m0.070s 00:11:29.659 user 0m0.039s 00:11:29.659 sys 0m0.030s 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:29.659 ************************************ 00:11:29.659 END TEST dd_no_input 00:11:29.659 ************************************ 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.659 ************************************ 00:11:29.659 START TEST dd_no_output 00:11:29.659 ************************************ 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.659 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.917 [2024-12-09 09:22:07.382310] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:11:29.917 09:22:07 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.917 00:11:29.917 real 0m0.075s 00:11:29.917 user 0m0.037s 00:11:29.917 sys 0m0.037s 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:29.917 ************************************ 00:11:29.917 END TEST dd_no_output 00:11:29.917 ************************************ 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.917 ************************************ 00:11:29.917 START TEST dd_wrong_blocksize 00:11:29.917 ************************************ 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:29.917 [2024-12-09 09:22:07.528201] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.917 ************************************ 00:11:29.917 END TEST dd_wrong_blocksize 00:11:29.917 ************************************ 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.917 00:11:29.917 real 0m0.071s 00:11:29.917 user 0m0.034s 00:11:29.917 sys 0m0.036s 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:29.917 ************************************ 00:11:29.917 START TEST dd_smaller_blocksize 00:11:29.917 ************************************ 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.917 
09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.917 09:22:07 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:30.174 [2024-12-09 09:22:07.669645] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:30.174 [2024-12-09 09:22:07.669729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61659 ] 00:11:30.174 [2024-12-09 09:22:07.819931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.174 [2024-12-09 09:22:07.873032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.431 [2024-12-09 09:22:07.913957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.688 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:30.946 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:30.946 [2024-12-09 09:22:08.509055] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:30.946 [2024-12-09 09:22:08.509132] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.946 [2024-12-09 09:22:08.604999] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:30.946 00:11:30.946 real 0m1.056s 00:11:30.946 user 0m0.391s 00:11:30.946 sys 0m0.558s 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.946 09:22:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:30.946 ************************************ 00:11:30.946 END TEST dd_smaller_blocksize 00:11:30.946 ************************************ 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 ************************************ 00:11:31.204 START TEST dd_invalid_count 00:11:31.204 ************************************ 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
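dd_wrong_blocksize and dd_smaller_blocksize probe the two ends of --bs validation: 0 is rejected outright as an invalid value, while 99999999999999 (roughly 91 TiB per I/O unit) parses fine but cannot be backed by hugepages, which is why the EAL prints the memseg_list complaints above before spdk_dd gives up with "Cannot allocate memory - try smaller block size value". A condensed sketch of both cases; dd.dump0/dd.dump1 stand in for the full dump paths used in the log:

# sketch only: both blocksize values are expected to fail, for different reasons
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
for bad_bs in 0 99999999999999; do
    if "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --bs="$bad_bs" >out.log 2>&1; then
        echo "unexpected success with --bs=$bad_bs" >&2
        exit 1
    fi
done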
00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:11:31.204 [2024-12-09 09:22:08.791410] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.204 00:11:31.204 real 0m0.073s 00:11:31.204 user 0m0.040s 00:11:31.204 sys 0m0.032s 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.204 ************************************ 00:11:31.204 END TEST dd_invalid_count 00:11:31.204 ************************************ 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 ************************************ 
00:11:31.204 START TEST dd_invalid_oflag 00:11:31.204 ************************************ 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:11:31.204 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.205 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:11:31.463 [2024-12-09 09:22:08.937021] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:11:31.463 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:11:31.463 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.463 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:31.463 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.463 00:11:31.463 real 0m0.071s 00:11:31.463 user 0m0.038s 00:11:31.463 sys 0m0.032s 00:11:31.463 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.463 09:22:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 ************************************ 00:11:31.463 END TEST dd_invalid_oflag 00:11:31.463 ************************************ 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 ************************************ 00:11:31.463 START TEST dd_invalid_iflag 00:11:31.463 
************************************ 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:11:31.463 [2024-12-09 09:22:09.080961] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.463 00:11:31.463 real 0m0.071s 00:11:31.463 user 0m0.038s 00:11:31.463 sys 0m0.033s 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 ************************************ 00:11:31.463 END TEST dd_invalid_iflag 00:11:31.463 ************************************ 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:31.463 ************************************ 00:11:31.463 START TEST dd_unknown_flag 00:11:31.463 ************************************ 00:11:31.463 
09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.463 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.464 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.464 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:11:31.722 [2024-12-09 09:22:09.224767] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
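The dd_unknown_flag case traced above reduces to the one-liner below (paths shortened for readability; it assumes the spdk_dd binary built under build/bin, as in the trace): spdk_dd must reject a file flag it does not recognize and exit non-zero.

    # illustrative sketch of the dd_unknown_flag negative case
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=-1
    # expected: *ERROR*: Unknown file flag: -1, followed by a non-zero exit status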
00:11:31.722 [2024-12-09 09:22:09.224831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61751 ] 00:11:31.722 [2024-12-09 09:22:09.378728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.722 [2024-12-09 09:22:09.430612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.980 [2024-12-09 09:22:09.471531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.980 [2024-12-09 09:22:09.502264] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:11:31.980 [2024-12-09 09:22:09.502328] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.980 [2024-12-09 09:22:09.502404] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:11:31.980 [2024-12-09 09:22:09.502422] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.980 [2024-12-09 09:22:09.502676] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:11:31.980 [2024-12-09 09:22:09.502703] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.980 [2024-12-09 09:22:09.502766] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:31.980 [2024-12-09 09:22:09.502785] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:11:31.980 [2024-12-09 09:22:09.598060] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:31.980 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:11:31.980 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.980 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:11:31.980 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:11:31.980 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:11:31.980 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.980 00:11:31.981 real 0m0.491s 00:11:31.981 user 0m0.258s 00:11:31.981 sys 0m0.141s 00:11:31.981 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.981 ************************************ 00:11:31.981 END TEST dd_unknown_flag 00:11:31.981 ************************************ 00:11:31.981 09:22:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:32.240 ************************************ 00:11:32.240 START TEST dd_invalid_json 00:11:32.240 ************************************ 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:32.240 09:22:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:11:32.240 [2024-12-09 09:22:09.780006] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
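The dd_invalid_json invocation being traced here can be sketched as follows (the 62< <(:) redirection is only an assumption about how the harness feeds an empty stream to /dev/fd/62): an empty JSON config must be rejected before any copy starts.

    # illustrative sketch of the dd_invalid_json negative case
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --json /dev/fd/62 62< <(:)
    # expected: *ERROR*: JSON data cannot be empty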
00:11:32.240 [2024-12-09 09:22:09.780113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:11:32.240 [2024-12-09 09:22:09.938164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.500 [2024-12-09 09:22:09.989681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.500 [2024-12-09 09:22:09.989749] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:11:32.500 [2024-12-09 09:22:09.989766] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:32.500 [2024-12-09 09:22:09.989774] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:32.500 [2024-12-09 09:22:09.989805] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.500 ************************************ 00:11:32.500 END TEST dd_invalid_json 00:11:32.500 ************************************ 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.500 00:11:32.500 real 0m0.325s 00:11:32.500 user 0m0.159s 00:11:32.500 sys 0m0.064s 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.500 09:22:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:32.501 ************************************ 00:11:32.501 START TEST dd_invalid_seek 00:11:32.501 ************************************ 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:32.501 
09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:32.501 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:11:32.501 [2024-12-09 09:22:10.182089] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
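Putting the dd_invalid_seek pieces together: the trace above declares two 512-block, 512-byte malloc bdevs (malloc0, malloc1) and then asks spdk_dd to seek 513 blocks into the output, which cannot fit. A self-contained sketch follows; the heredoc-plus-process-substitution plumbing is illustrative, not the harness's exact gen_conf mechanism.

    # illustrative sketch of the dd_invalid_seek negative case
    conf=$(cat <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 } },
      { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 512, "block_size": 512 } },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    )
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json <(printf '%s' "$conf")
    # expected: *ERROR*: --seek value too big (513) - only 512 blocks available in output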
00:11:32.501 [2024-12-09 09:22:10.182186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61809 ] 00:11:32.501 { 00:11:32.501 "subsystems": [ 00:11:32.501 { 00:11:32.501 "subsystem": "bdev", 00:11:32.501 "config": [ 00:11:32.501 { 00:11:32.501 "params": { 00:11:32.501 "block_size": 512, 00:11:32.501 "num_blocks": 512, 00:11:32.501 "name": "malloc0" 00:11:32.501 }, 00:11:32.501 "method": "bdev_malloc_create" 00:11:32.501 }, 00:11:32.501 { 00:11:32.501 "params": { 00:11:32.501 "block_size": 512, 00:11:32.501 "num_blocks": 512, 00:11:32.501 "name": "malloc1" 00:11:32.501 }, 00:11:32.501 "method": "bdev_malloc_create" 00:11:32.501 }, 00:11:32.501 { 00:11:32.501 "method": "bdev_wait_for_examine" 00:11:32.501 } 00:11:32.501 ] 00:11:32.501 } 00:11:32.501 ] 00:11:32.501 } 00:11:32.793 [2024-12-09 09:22:10.336492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.793 [2024-12-09 09:22:10.387931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.793 [2024-12-09 09:22:10.429483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.793 [2024-12-09 09:22:10.486679] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:11:32.793 [2024-12-09 09:22:10.486968] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.068 [2024-12-09 09:22:10.584830] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.068 00:11:33.068 real 0m0.521s 00:11:33.068 user 0m0.332s 00:11:33.068 sys 0m0.153s 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:11:33.068 ************************************ 00:11:33.068 END TEST dd_invalid_seek 00:11:33.068 ************************************ 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:33.068 ************************************ 00:11:33.068 START TEST dd_invalid_skip 00:11:33.068 ************************************ 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:33.068 09:22:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:11:33.068 { 00:11:33.068 "subsystems": [ 00:11:33.068 { 00:11:33.068 "subsystem": "bdev", 00:11:33.068 "config": [ 00:11:33.068 { 00:11:33.068 "params": { 00:11:33.068 "block_size": 512, 00:11:33.068 "num_blocks": 512, 00:11:33.068 "name": "malloc0" 00:11:33.068 }, 00:11:33.068 "method": "bdev_malloc_create" 00:11:33.068 }, 00:11:33.068 { 00:11:33.068 "params": { 00:11:33.068 "block_size": 512, 00:11:33.068 "num_blocks": 512, 00:11:33.068 "name": "malloc1" 
00:11:33.068 }, 00:11:33.068 "method": "bdev_malloc_create" 00:11:33.068 }, 00:11:33.068 { 00:11:33.068 "method": "bdev_wait_for_examine" 00:11:33.068 } 00:11:33.068 ] 00:11:33.068 } 00:11:33.068 ] 00:11:33.068 } 00:11:33.068 [2024-12-09 09:22:10.779500] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:33.068 [2024-12-09 09:22:10.779762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61850 ] 00:11:33.326 [2024-12-09 09:22:10.933924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.326 [2024-12-09 09:22:10.985750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.326 [2024-12-09 09:22:11.027897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.585 [2024-12-09 09:22:11.084398] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:11:33.585 [2024-12-09 09:22:11.084457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:33.585 [2024-12-09 09:22:11.182071] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.585 ************************************ 00:11:33.585 END TEST dd_invalid_skip 00:11:33.585 ************************************ 00:11:33.585 00:11:33.585 real 0m0.532s 00:11:33.585 user 0m0.331s 00:11:33.585 sys 0m0.155s 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.585 09:22:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:33.845 ************************************ 00:11:33.845 START TEST dd_invalid_input_count 00:11:33.845 ************************************ 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:33.845 09:22:11 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:33.845 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:11:33.845 [2024-12-09 09:22:11.372383] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
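dd_invalid_input_count reuses the same two-malloc-bdev shape; with only 512 blocks in the input bdev, a --count of 513 has to be refused. A sketch, reusing the $conf JSON from the dd_invalid_seek sketch above:

    # illustrative sketch of the dd_invalid_input_count negative case
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --bs=512 --json <(printf '%s' "$conf")
    # expected: *ERROR*: --count value too big (513) - only 512 blocks available from input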
00:11:33.845 [2024-12-09 09:22:11.372734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61882 ] 00:11:33.845 { 00:11:33.845 "subsystems": [ 00:11:33.845 { 00:11:33.845 "subsystem": "bdev", 00:11:33.845 "config": [ 00:11:33.845 { 00:11:33.845 "params": { 00:11:33.845 "block_size": 512, 00:11:33.845 "num_blocks": 512, 00:11:33.845 "name": "malloc0" 00:11:33.845 }, 00:11:33.845 "method": "bdev_malloc_create" 00:11:33.845 }, 00:11:33.845 { 00:11:33.845 "params": { 00:11:33.845 "block_size": 512, 00:11:33.845 "num_blocks": 512, 00:11:33.845 "name": "malloc1" 00:11:33.845 }, 00:11:33.845 "method": "bdev_malloc_create" 00:11:33.845 }, 00:11:33.845 { 00:11:33.845 "method": "bdev_wait_for_examine" 00:11:33.845 } 00:11:33.845 ] 00:11:33.845 } 00:11:33.845 ] 00:11:33.845 } 00:11:33.845 [2024-12-09 09:22:11.532284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.103 [2024-12-09 09:22:11.584143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.103 [2024-12-09 09:22:11.625938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.103 [2024-12-09 09:22:11.683277] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:11:34.103 [2024-12-09 09:22:11.683338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:34.103 [2024-12-09 09:22:11.781248] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.362 00:11:34.362 real 0m0.519s 00:11:34.362 user 0m0.330s 00:11:34.362 sys 0m0.151s 00:11:34.362 ************************************ 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 END TEST dd_invalid_input_count 00:11:34.362 ************************************ 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 ************************************ 00:11:34.362 START TEST dd_invalid_output_count 00:11:34.362 ************************************ 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:34.362 09:22:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:11:34.362 { 00:11:34.362 "subsystems": [ 00:11:34.362 { 00:11:34.362 "subsystem": "bdev", 00:11:34.362 "config": [ 00:11:34.362 { 00:11:34.362 "params": { 00:11:34.362 "block_size": 512, 00:11:34.362 "num_blocks": 512, 00:11:34.362 "name": "malloc0" 00:11:34.362 }, 00:11:34.362 "method": "bdev_malloc_create" 00:11:34.362 }, 00:11:34.362 { 00:11:34.362 "method": "bdev_wait_for_examine" 00:11:34.362 } 00:11:34.362 ] 00:11:34.362 } 00:11:34.362 ] 00:11:34.362 } 00:11:34.362 [2024-12-09 09:22:11.963767] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 
initialization... 00:11:34.362 [2024-12-09 09:22:11.963846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61915 ] 00:11:34.620 [2024-12-09 09:22:12.112864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.621 [2024-12-09 09:22:12.164741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.621 [2024-12-09 09:22:12.206155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.621 [2024-12-09 09:22:12.254818] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:11:34.621 [2024-12-09 09:22:12.254885] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:34.879 [2024-12-09 09:22:12.353391] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.879 00:11:34.879 real 0m0.514s 00:11:34.879 user 0m0.321s 00:11:34.879 sys 0m0.148s 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.879 ************************************ 00:11:34.879 END TEST dd_invalid_output_count 00:11:34.879 ************************************ 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:34.879 ************************************ 00:11:34.879 START TEST dd_bs_not_multiple 00:11:34.879 ************************************ 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:11:34.879 09:22:12 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:34.879 09:22:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:11:34.879 { 00:11:34.879 "subsystems": [ 00:11:34.879 { 00:11:34.879 "subsystem": "bdev", 00:11:34.879 "config": [ 00:11:34.879 { 00:11:34.879 "params": { 00:11:34.879 "block_size": 512, 00:11:34.879 "num_blocks": 512, 00:11:34.879 "name": "malloc0" 00:11:34.879 }, 00:11:34.879 "method": "bdev_malloc_create" 00:11:34.879 }, 00:11:34.879 { 00:11:34.879 "params": { 00:11:34.879 "block_size": 512, 00:11:34.879 "num_blocks": 512, 00:11:34.879 "name": "malloc1" 00:11:34.879 }, 00:11:34.879 "method": "bdev_malloc_create" 00:11:34.879 }, 00:11:34.879 { 00:11:34.879 "method": "bdev_wait_for_examine" 00:11:34.879 } 00:11:34.879 ] 00:11:34.879 } 00:11:34.879 ] 00:11:34.879 } 00:11:34.879 [2024-12-09 09:22:12.553466] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
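dd_bs_not_multiple checks the block-size alignment rule: with 512-byte malloc bdevs on both ends, a --bs of 513 is not a multiple of the input's native block size and must be rejected. A sketch, again reusing the $conf JSON from the dd_invalid_seek sketch:

    # illustrative sketch of the dd_bs_not_multiple negative case
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json <(printf '%s' "$conf")
    # expected: *ERROR*: --bs value must be a multiple of input native block size (512)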
00:11:34.879 [2024-12-09 09:22:12.553629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:11:35.137 [2024-12-09 09:22:12.711766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.137 [2024-12-09 09:22:12.765419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.137 [2024-12-09 09:22:12.807382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.395 [2024-12-09 09:22:12.864197] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:11:35.396 [2024-12-09 09:22:12.864263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:35.396 [2024-12-09 09:22:12.963228] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.396 00:11:35.396 real 0m0.538s 00:11:35.396 user 0m0.323s 00:11:35.396 sys 0m0.170s 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:11:35.396 ************************************ 00:11:35.396 END TEST dd_bs_not_multiple 00:11:35.396 ************************************ 00:11:35.396 00:11:35.396 real 0m6.551s 00:11:35.396 user 0m3.275s 00:11:35.396 sys 0m2.733s 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.396 09:22:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:35.396 ************************************ 00:11:35.396 END TEST spdk_dd_negative 00:11:35.396 ************************************ 00:11:35.654 00:11:35.654 real 1m11.905s 00:11:35.654 user 0m43.900s 00:11:35.654 sys 0m32.836s 00:11:35.654 09:22:13 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.654 09:22:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:35.654 ************************************ 00:11:35.654 END TEST spdk_dd 00:11:35.654 ************************************ 00:11:35.654 09:22:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:35.654 09:22:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.654 09:22:13 -- common/autotest_common.sh@10 -- # set +x 00:11:35.654 09:22:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:11:35.654 09:22:13 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:11:35.654 09:22:13 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:35.654 09:22:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.654 09:22:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.654 09:22:13 -- common/autotest_common.sh@10 -- # set +x 00:11:35.654 ************************************ 00:11:35.654 START TEST nvmf_tcp 00:11:35.654 ************************************ 00:11:35.654 09:22:13 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:35.654 * Looking for test storage... 00:11:35.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:35.654 09:22:13 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.654 09:22:13 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.654 09:22:13 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.913 09:22:13 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.913 --rc genhtml_branch_coverage=1 00:11:35.913 --rc genhtml_function_coverage=1 00:11:35.913 --rc genhtml_legend=1 00:11:35.913 --rc geninfo_all_blocks=1 00:11:35.913 --rc geninfo_unexecuted_blocks=1 00:11:35.913 00:11:35.913 ' 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.913 --rc genhtml_branch_coverage=1 00:11:35.913 --rc genhtml_function_coverage=1 00:11:35.913 --rc genhtml_legend=1 00:11:35.913 --rc geninfo_all_blocks=1 00:11:35.913 --rc geninfo_unexecuted_blocks=1 00:11:35.913 00:11:35.913 ' 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.913 --rc genhtml_branch_coverage=1 00:11:35.913 --rc genhtml_function_coverage=1 00:11:35.913 --rc genhtml_legend=1 00:11:35.913 --rc geninfo_all_blocks=1 00:11:35.913 --rc geninfo_unexecuted_blocks=1 00:11:35.913 00:11:35.913 ' 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.913 --rc genhtml_branch_coverage=1 00:11:35.913 --rc genhtml_function_coverage=1 00:11:35.913 --rc genhtml_legend=1 00:11:35.913 --rc geninfo_all_blocks=1 00:11:35.913 --rc geninfo_unexecuted_blocks=1 00:11:35.913 00:11:35.913 ' 00:11:35.913 09:22:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:35.913 09:22:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:35.913 09:22:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.913 09:22:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:35.913 ************************************ 00:11:35.913 START TEST nvmf_target_core 00:11:35.913 ************************************ 00:11:35.913 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:35.913 * Looking for test storage... 00:11:35.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:35.913 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.913 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.913 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.174 --rc genhtml_branch_coverage=1 00:11:36.174 --rc genhtml_function_coverage=1 00:11:36.174 --rc genhtml_legend=1 00:11:36.174 --rc geninfo_all_blocks=1 00:11:36.174 --rc geninfo_unexecuted_blocks=1 00:11:36.174 00:11:36.174 ' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.174 --rc genhtml_branch_coverage=1 00:11:36.174 --rc genhtml_function_coverage=1 00:11:36.174 --rc genhtml_legend=1 00:11:36.174 --rc geninfo_all_blocks=1 00:11:36.174 --rc geninfo_unexecuted_blocks=1 00:11:36.174 00:11:36.174 ' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.174 --rc genhtml_branch_coverage=1 00:11:36.174 --rc genhtml_function_coverage=1 00:11:36.174 --rc genhtml_legend=1 00:11:36.174 --rc geninfo_all_blocks=1 00:11:36.174 --rc geninfo_unexecuted_blocks=1 00:11:36.174 00:11:36.174 ' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.174 --rc genhtml_branch_coverage=1 00:11:36.174 --rc genhtml_function_coverage=1 00:11:36.174 --rc genhtml_legend=1 00:11:36.174 --rc geninfo_all_blocks=1 00:11:36.174 --rc geninfo_unexecuted_blocks=1 00:11:36.174 00:11:36.174 ' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.174 09:22:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:36.175 ************************************ 00:11:36.175 START TEST nvmf_host_management 00:11:36.175 ************************************ 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:36.175 * Looking for test storage... 
00:11:36.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:36.175 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.435 --rc genhtml_branch_coverage=1 00:11:36.435 --rc genhtml_function_coverage=1 00:11:36.435 --rc genhtml_legend=1 00:11:36.435 --rc geninfo_all_blocks=1 00:11:36.435 --rc geninfo_unexecuted_blocks=1 00:11:36.435 00:11:36.435 ' 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.435 --rc genhtml_branch_coverage=1 00:11:36.435 --rc genhtml_function_coverage=1 00:11:36.435 --rc genhtml_legend=1 00:11:36.435 --rc geninfo_all_blocks=1 00:11:36.435 --rc geninfo_unexecuted_blocks=1 00:11:36.435 00:11:36.435 ' 00:11:36.435 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:36.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.435 --rc genhtml_branch_coverage=1 00:11:36.435 --rc genhtml_function_coverage=1 00:11:36.435 --rc genhtml_legend=1 00:11:36.435 --rc geninfo_all_blocks=1 00:11:36.435 --rc geninfo_unexecuted_blocks=1 00:11:36.435 00:11:36.435 ' 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:36.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.436 --rc genhtml_branch_coverage=1 00:11:36.436 --rc genhtml_function_coverage=1 00:11:36.436 --rc genhtml_legend=1 00:11:36.436 --rc geninfo_all_blocks=1 00:11:36.436 --rc geninfo_unexecuted_blocks=1 00:11:36.436 00:11:36.436 ' 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
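host_management.sh sources test/nvmf/common.sh (traced below) and later calls nvmftestinit; because this run uses NET_TYPE=virt, that path goes through nvmf_veth_init, which builds the test network out of veth pairs, a bridge, and a network namespace rather than physical NICs. A condensed, hand-written sketch of the topology those ip/iptables commands create, using only the interface names and addresses visible in the trace (the real helper also creates a second initiator/target pair, adds 10.0.0.2 and 10.0.0.4, and carries extra error handling):

# one initiator-side veth pair and one target-side pair, joined by a bridge;
# the target end of the second pair lives in the nvmf_tgt_ns_spdk namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic in and confirm the target address answers
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3

The helper tears down any stale topology before building a fresh one, which is why the trace below is full of "Cannot find device ..." and "Cannot open network namespace ..." messages: the deletes run first and find nothing to remove on a clean VM.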
00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.436 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.436 09:22:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.436 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:36.437 Cannot find device "nvmf_init_br" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:36.437 Cannot find device "nvmf_init_br2" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:36.437 Cannot find device "nvmf_tgt_br" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:36.437 Cannot find device "nvmf_tgt_br2" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:36.437 Cannot find device "nvmf_init_br" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:36.437 Cannot find device "nvmf_init_br2" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:36.437 Cannot find device "nvmf_tgt_br" 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:11:36.437 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:36.695 Cannot find device "nvmf_tgt_br2" 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:36.695 Cannot find device "nvmf_br" 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:36.695 Cannot find device "nvmf_init_if" 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:36.695 Cannot find device "nvmf_init_if2" 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:36.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:36.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:36.695 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:36.954 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:37.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:37.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.158 ms 00:11:37.212 00:11:37.212 --- 10.0.0.3 ping statistics --- 00:11:37.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.212 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:37.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:37.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:11:37.212 00:11:37.212 --- 10.0.0.4 ping statistics --- 00:11:37.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.212 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:37.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:11:37.212 00:11:37.212 --- 10.0.0.1 ping statistics --- 00:11:37.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.212 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:37.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:11:37.212 00:11:37.212 --- 10.0.0.2 ping statistics --- 00:11:37.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.212 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62291 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62291 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62291 ']' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:37.212 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:37.212 [2024-12-09 09:22:14.818112] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:37.212 [2024-12-09 09:22:14.818209] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.469 [2024-12-09 09:22:14.973502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.469 [2024-12-09 09:22:15.030530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.469 [2024-12-09 09:22:15.030591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.469 [2024-12-09 09:22:15.030603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.469 [2024-12-09 09:22:15.030611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.469 [2024-12-09 09:22:15.030618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.469 [2024-12-09 09:22:15.031557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.469 [2024-12-09 09:22:15.032620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.469 [2024-12-09 09:22:15.032810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.469 [2024-12-09 09:22:15.032810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:37.469 [2024-12-09 09:22:15.077379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.033 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.033 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:38.033 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.033 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.033 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.290 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 [2024-12-09 09:22:15.766479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 Malloc0 00:11:38.291 [2024-12-09 09:22:15.863150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62345 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62345 /var/tmp/bdevperf.sock 00:11:38.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62345 ']' 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:38.291 { 00:11:38.291 "params": { 00:11:38.291 "name": "Nvme$subsystem", 00:11:38.291 "trtype": "$TEST_TRANSPORT", 00:11:38.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.291 "adrfam": "ipv4", 00:11:38.291 "trsvcid": "$NVMF_PORT", 00:11:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.291 "hdgst": ${hdgst:-false}, 00:11:38.291 "ddgst": ${ddgst:-false} 00:11:38.291 }, 00:11:38.291 "method": "bdev_nvme_attach_controller" 00:11:38.291 } 00:11:38.291 EOF 00:11:38.291 )") 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:38.291 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:38.291 "params": { 00:11:38.291 "name": "Nvme0", 00:11:38.291 "trtype": "tcp", 00:11:38.291 "traddr": "10.0.0.3", 00:11:38.291 "adrfam": "ipv4", 00:11:38.291 "trsvcid": "4420", 00:11:38.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:38.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:38.291 "hdgst": false, 00:11:38.291 "ddgst": false 00:11:38.291 }, 00:11:38.291 "method": "bdev_nvme_attach_controller" 00:11:38.291 }' 00:11:38.291 [2024-12-09 09:22:15.987269] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:38.291 [2024-12-09 09:22:15.987531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62345 ] 00:11:38.549 [2024-12-09 09:22:16.125224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.549 [2024-12-09 09:22:16.180114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.549 [2024-12-09 09:22:16.232101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.828 Running I/O for 10 seconds... 
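With the transport, the Malloc0 namespace, and the 10.0.0.3:4420 listener in place, bdevperf attaches to nqn.2016-06.io.spdk:cnode0 using the JSON printed above and starts the 10-second verify workload. The waitforio helper traced next polls the bdevperf RPC socket until the Nvme0n1 bdev reports at least 100 completed reads before the test moves on. A rough standalone equivalent of that poll, assuming SPDK's scripts/rpc.py and jq are on the PATH (the in-tree helper goes through the rpc_cmd wrapper and keeps its own retry counter):

# poll bdevperf's RPC socket for read completions on the attached Nvme0n1 bdev
for i in $(seq 1 10); do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        echo "I/O is flowing ($reads reads observed)"
        break
    fi
    sleep 1
done

In this run the very first sample already reports 1283 reads, so the loop breaks immediately and the test proceeds to remove the host from the subsystem while I/O is still in flight.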
00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1283 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1283 -ge 100 ']' 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.393 09:22:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.393 [2024-12-09 09:22:17.084807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:12 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:39.393 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.393 [2024-12-09 09:22:17.085213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.393 [2024-12-09 09:22:17.085440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.393 [2024-12-09 09:22:17.085449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.085975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.085985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:39.394 [2024-12-09 09:22:17.085995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 
[2024-12-09 09:22:17.086206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.394 [2024-12-09 09:22:17.086214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.394 [2024-12-09 09:22:17.086224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 
09:22:17.086443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:39.395 [2024-12-09 09:22:17.086544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a0c00 is same with the state(6) to be set 00:11:39.395 [2024-12-09 09:22:17.086780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.395 [2024-12-09 09:22:17.086795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.395 [2024-12-09 09:22:17.086815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.395 [2024-12-09 09:22:17.086835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.395 [2024-12-09 09:22:17.086854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.395 [2024-12-09 09:22:17.086864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1ce0 is same with the state(6) to be set 00:11:39.395 task offset: 40960 on job bdev=Nvme0n1 fails 00:11:39.395 00:11:39.395 
Latency(us) 00:11:39.395 [2024-12-09T09:22:17.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.395 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:39.395 Job: Nvme0n1 ended in about 0.74 seconds with error 00:11:39.395 Verification LBA range: start 0x0 length 0x400 00:11:39.395 Nvme0n1 : 0.74 1814.83 113.43 86.42 0.00 33042.52 2500.37 31583.61 00:11:39.395 [2024-12-09T09:22:17.118Z] =================================================================================================================== 00:11:39.395 [2024-12-09T09:22:17.118Z] Total : 1814.83 113.43 86.42 0.00 33042.52 2500.37 31583.61 00:11:39.395 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:39.395 [2024-12-09 09:22:17.087841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:39.395 [2024-12-09 09:22:17.089897] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:39.395 [2024-12-09 09:22:17.089917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1ce0 (9): Bad file descriptor 00:11:39.395 [2024-12-09 09:22:17.093972] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:11:39.395 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.395 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62345 00:11:40.766 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62345) - No such process 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:40.766 { 00:11:40.766 "params": { 00:11:40.766 "name": "Nvme$subsystem", 00:11:40.766 "trtype": "$TEST_TRANSPORT", 00:11:40.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.766 "adrfam": "ipv4", 00:11:40.766 "trsvcid": "$NVMF_PORT", 00:11:40.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.766 "hdgst": ${hdgst:-false}, 00:11:40.766 "ddgst": ${ddgst:-false} 00:11:40.766 }, 00:11:40.766 "method": "bdev_nvme_attach_controller" 00:11:40.766 } 00:11:40.766 EOF 00:11:40.766 )") 00:11:40.766 09:22:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:40.766 09:22:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:40.766 "params": { 00:11:40.766 "name": "Nvme0", 00:11:40.766 "trtype": "tcp", 00:11:40.766 "traddr": "10.0.0.3", 00:11:40.766 "adrfam": "ipv4", 00:11:40.766 "trsvcid": "4420", 00:11:40.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:40.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:40.766 "hdgst": false, 00:11:40.766 "ddgst": false 00:11:40.766 }, 00:11:40.766 "method": "bdev_nvme_attach_controller" 00:11:40.766 }' 00:11:40.766 [2024-12-09 09:22:18.159186] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:11:40.766 [2024-12-09 09:22:18.159271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62383 ] 00:11:40.766 [2024-12-09 09:22:18.313028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.766 [2024-12-09 09:22:18.368581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.766 [2024-12-09 09:22:18.419165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.023 Running I/O for 1 seconds... 00:11:41.958 1920.00 IOPS, 120.00 MiB/s 00:11:41.958 Latency(us) 00:11:41.958 [2024-12-09T09:22:19.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:41.958 Verification LBA range: start 0x0 length 0x400 00:11:41.958 Nvme0n1 : 1.01 1969.04 123.06 0.00 0.00 31994.23 3329.44 29688.60 00:11:41.958 [2024-12-09T09:22:19.681Z] =================================================================================================================== 00:11:41.958 [2024-12-09T09:22:19.681Z] Total : 1969.04 123.06 0.00 0.00 31994.23 3329.44 29688.60 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
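Note on the teardown the trace is entering here: nvmftestfini unloads the NVMe-oF kernel modules inside a bounded retry loop (module removal can fail while connections are still draining) and then kills the target pid only after a killprocess-style check that the pid still names the expected process. A simplified sketch of that pattern follows; the retry pacing and the exact guard conditions are assumptions, not the verbatim helpers:

    # Sketch of the cleanup pattern: bounded module-unload retries, then a
    # guarded kill of the target process.
    nvmf_cleanup_sketch() {
        local pid=$1 expected_name=${2:-reactor_1}
        sync
        set +e
        for _ in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 0.5   # assumption: the real helper's pacing may differ
        done
        set -e
        # Only signal the pid if it still maps to the process we started
        # (ps prints nothing once the pid has exited or been recycled).
        if [[ "$(ps --no-headers -o comm= "$pid")" == "$expected_name" ]]; then
            kill "$pid"
            wait "$pid" || true
        fi
    }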
00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.218 rmmod nvme_tcp 00:11:42.218 rmmod nvme_fabrics 00:11:42.218 rmmod nvme_keyring 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62291 ']' 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62291 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62291 ']' 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62291 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62291 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62291' 00:11:42.218 killing process with pid 62291 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62291 00:11:42.218 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62291 00:11:42.476 [2024-12-09 09:22:20.102379] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:42.476 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:42.476 09:22:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.735 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:42.994 00:11:42.994 real 0m6.704s 00:11:42.994 user 0m23.064s 00:11:42.994 sys 0m1.961s 00:11:42.994 ************************************ 00:11:42.994 END TEST nvmf_host_management 00:11:42.994 ************************************ 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 ************************************ 00:11:42.994 START TEST nvmf_lvol 00:11:42.994 ************************************ 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:42.994 * Looking for test storage... 
00:11:42.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.994 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.253 --rc genhtml_branch_coverage=1 00:11:43.253 --rc genhtml_function_coverage=1 00:11:43.253 --rc genhtml_legend=1 00:11:43.253 --rc geninfo_all_blocks=1 00:11:43.253 --rc geninfo_unexecuted_blocks=1 00:11:43.253 00:11:43.253 ' 00:11:43.253 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.253 --rc genhtml_branch_coverage=1 00:11:43.253 --rc genhtml_function_coverage=1 00:11:43.253 --rc genhtml_legend=1 00:11:43.253 --rc geninfo_all_blocks=1 00:11:43.253 --rc geninfo_unexecuted_blocks=1 00:11:43.253 00:11:43.253 ' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.254 --rc genhtml_branch_coverage=1 00:11:43.254 --rc genhtml_function_coverage=1 00:11:43.254 --rc genhtml_legend=1 00:11:43.254 --rc geninfo_all_blocks=1 00:11:43.254 --rc geninfo_unexecuted_blocks=1 00:11:43.254 00:11:43.254 ' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.254 --rc genhtml_branch_coverage=1 00:11:43.254 --rc genhtml_function_coverage=1 00:11:43.254 --rc genhtml_legend=1 00:11:43.254 --rc geninfo_all_blocks=1 00:11:43.254 --rc geninfo_unexecuted_blocks=1 00:11:43.254 00:11:43.254 ' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.254 09:22:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:43.254 
09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:43.254 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
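Note on nvmf_veth_init: the variables above parameterize the veth/netns topology that the following trace builds — initiator interfaces stay in the root namespace on 10.0.0.1/.2, the target interfaces are moved into nvmf_tgt_ns_spdk on 10.0.0.3/.4, and the bridge ends are enslaved to nvmf_br. A condensed sketch of the same command sequence (stale-link cleanup, error handling, and the SPDK_NVMF iptables rules are omitted):

    # Condensed sketch of the veth/netns setup traced below.
    NETNS=nvmf_tgt_ns_spdk
    ip netns add "$NETNS"

    # veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side interfaces live in the namespace the SPDK target runs in.
    ip link set nvmf_tgt_if  netns "$NETNS"
    ip link set nvmf_tgt_if2 netns "$NETNS"

    # Addressing as in the trace: initiators on .1/.2, targets on .3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NETNS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, then bridge the *_br ends in the root namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NETNS" ip link set nvmf_tgt_if up
    ip netns exec "$NETNS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NETNS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done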
00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:43.255 Cannot find device "nvmf_init_br" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:43.255 Cannot find device "nvmf_init_br2" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:43.255 Cannot find device "nvmf_tgt_br" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.255 Cannot find device "nvmf_tgt_br2" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:43.255 Cannot find device "nvmf_init_br" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:43.255 Cannot find device "nvmf_init_br2" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:43.255 Cannot find device "nvmf_tgt_br" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:43.255 Cannot find device "nvmf_tgt_br2" 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:11:43.255 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:43.513 Cannot find device "nvmf_br" 00:11:43.513 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:11:43.513 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:43.514 Cannot find device "nvmf_init_if" 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:43.514 Cannot find device "nvmf_init_if2" 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:43.514 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:43.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:11:43.773 00:11:43.773 --- 10.0.0.3 ping statistics --- 00:11:43.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.773 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:43.773 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:43.773 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:11:43.773 00:11:43.773 --- 10.0.0.4 ping statistics --- 00:11:43.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.773 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:43.773 00:11:43.773 --- 10.0.0.1 ping statistics --- 00:11:43.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.773 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:43.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:43.773 00:11:43.773 --- 10.0.0.2 ping statistics --- 00:11:43.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.773 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:43.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62657 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62657 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62657 ']' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:43.773 09:22:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:43.773 [2024-12-09 09:22:21.432908] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:43.773 [2024-12-09 09:22:21.432982] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.032 [2024-12-09 09:22:21.585574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.032 [2024-12-09 09:22:21.637917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.032 [2024-12-09 09:22:21.638162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.032 [2024-12-09 09:22:21.638178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.032 [2024-12-09 09:22:21.638187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.032 [2024-12-09 09:22:21.638194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.032 [2024-12-09 09:22:21.639126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.032 [2024-12-09 09:22:21.639186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.032 [2024-12-09 09:22:21.639190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.032 [2024-12-09 09:22:21.681533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.600 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.600 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:44.600 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.600 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.600 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.858 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.858 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:45.116 [2024-12-09 09:22:22.616314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.116 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:45.374 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:45.374 09:22:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:45.631 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:45.631 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:45.631 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:45.890 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0f1c4cde-1307-4c9c-a872-d7ae59a3e0d8 00:11:45.890 09:22:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0f1c4cde-1307-4c9c-a872-d7ae59a3e0d8 lvol 20 00:11:46.148 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=75334dd3-20f2-4dfb-9f61-a480f39807ad 00:11:46.148 09:22:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:46.406 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75334dd3-20f2-4dfb-9f61-a480f39807ad 00:11:46.665 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:46.924 [2024-12-09 09:22:24.447556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:46.924 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:47.182 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:47.182 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62733 00:11:47.182 09:22:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:48.136 09:22:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 75334dd3-20f2-4dfb-9f61-a480f39807ad MY_SNAPSHOT 00:11:48.394 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5ce6fec2-ea62-4eda-8c38-ce6eeb5e14ac 00:11:48.394 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 75334dd3-20f2-4dfb-9f61-a480f39807ad 30 00:11:48.654 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5ce6fec2-ea62-4eda-8c38-ce6eeb5e14ac MY_CLONE 00:11:48.913 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cb568c20-5763-4351-a503-110f23bd1ac8 00:11:48.913 09:22:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate cb568c20-5763-4351-a503-110f23bd1ac8 00:11:49.482 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62733 00:11:57.713 Initializing NVMe Controllers 00:11:57.713 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:11:57.713 Controller IO queue size 128, less than required. 00:11:57.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:57.713 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:57.713 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:57.713 Initialization complete. Launching workers. 
00:11:57.713 ======================================================== 00:11:57.713 Latency(us) 00:11:57.713 Device Information : IOPS MiB/s Average min max 00:11:57.713 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10587.30 41.36 12090.03 1857.48 53621.61 00:11:57.713 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10477.00 40.93 12217.20 4109.81 94779.51 00:11:57.713 ======================================================== 00:11:57.713 Total : 21064.30 82.28 12153.29 1857.48 94779.51 00:11:57.713 00:11:57.713 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:57.713 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 75334dd3-20f2-4dfb-9f61-a480f39807ad 00:11:57.971 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f1c4cde-1307-4c9c-a872-d7ae59a3e0d8 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.229 rmmod nvme_tcp 00:11:58.229 rmmod nvme_fabrics 00:11:58.229 rmmod nvme_keyring 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62657 ']' 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62657 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62657 ']' 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62657 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62657 00:11:58.229 killing process with pid 62657 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62657' 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62657 00:11:58.229 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62657 00:11:58.487 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.487 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.487 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:58.488 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:11:58.746 ************************************ 00:11:58.746 END TEST nvmf_lvol 00:11:58.746 ************************************ 00:11:58.746 00:11:58.746 real 0m15.871s 00:11:58.746 user 
1m2.576s 00:11:58.746 sys 0m5.750s 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.746 09:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:59.004 ************************************ 00:11:59.004 START TEST nvmf_lvs_grow 00:11:59.004 ************************************ 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:59.004 * Looking for test storage... 00:11:59.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.004 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.005 --rc genhtml_branch_coverage=1 00:11:59.005 --rc genhtml_function_coverage=1 00:11:59.005 --rc genhtml_legend=1 00:11:59.005 --rc geninfo_all_blocks=1 00:11:59.005 --rc geninfo_unexecuted_blocks=1 00:11:59.005 00:11:59.005 ' 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.005 --rc genhtml_branch_coverage=1 00:11:59.005 --rc genhtml_function_coverage=1 00:11:59.005 --rc genhtml_legend=1 00:11:59.005 --rc geninfo_all_blocks=1 00:11:59.005 --rc geninfo_unexecuted_blocks=1 00:11:59.005 00:11:59.005 ' 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.005 --rc genhtml_branch_coverage=1 00:11:59.005 --rc genhtml_function_coverage=1 00:11:59.005 --rc genhtml_legend=1 00:11:59.005 --rc geninfo_all_blocks=1 00:11:59.005 --rc geninfo_unexecuted_blocks=1 00:11:59.005 00:11:59.005 ' 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.005 --rc genhtml_branch_coverage=1 00:11:59.005 --rc genhtml_function_coverage=1 00:11:59.005 --rc genhtml_legend=1 00:11:59.005 --rc geninfo_all_blocks=1 00:11:59.005 --rc geninfo_unexecuted_blocks=1 00:11:59.005 00:11:59.005 ' 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:59.005 09:22:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.005 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.264 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
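nvmf_lvs_grow.sh keeps two JSON-RPC endpoints separate: rpc_py (scripts/rpc.py on the default /var/tmp/spdk.sock) configures the nvmf_tgt target process, while the same script pointed at bdevperf_rpc_sock configures the bdevperf initiator that is started later with -z, i.e. idle until a controller is handed to it over RPC. A minimal sketch of that split, built from calls that appear later in this trace (socket paths, address, and NQN as traced):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target process (default socket /var/tmp/spdk.sock): create the TCP transport
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # bdevperf process (its own socket): attach the exported namespace as bdev Nvme0n1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $RPC -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000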
00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
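As in the lvol run above, NVMF_TARGET_NS_CMD is just an ip-netns prefix: once the veth topology is rebuilt, nvmfappstart prepends it to NVMF_APP so the target binary runs inside nvmf_tgt_ns_spdk and therefore listens on the namespaced 10.0.0.3/10.0.0.4 addresses. Reduced to what it expands to for this test (core mask 0x1, as traced further down):

  # what "${NVMF_APP[@]}" amounts to once the namespace prefix is prepended
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  # i.e. the launch seen later in the trace:
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1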
00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:59.265 Cannot find device "nvmf_init_br" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:59.265 Cannot find device "nvmf_init_br2" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:59.265 Cannot find device "nvmf_tgt_br" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:59.265 Cannot find device "nvmf_tgt_br2" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:59.265 Cannot find device "nvmf_init_br" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:59.265 Cannot find device "nvmf_init_br2" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:59.265 Cannot find device "nvmf_tgt_br" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:59.265 Cannot find device "nvmf_tgt_br2" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:59.265 Cannot find device "nvmf_br" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:59.265 Cannot find device "nvmf_init_if" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:59.265 Cannot find device "nvmf_init_if2" 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:11:59.265 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:59.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:59.266 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:11:59.266 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:59.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:11:59.266 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:11:59.266 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:59.266 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:59.266 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:59.524 09:22:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:59.524 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:59.524 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:59.524 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:59.524 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:59.524 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
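The ipts helper invoked next is a thin wrapper over iptables: every rule the test inserts carries an 'SPDK_NVMF:' comment, and that tag is what lets the iptr cleanup (seen at the end of the lvol run above) remove exactly the test's rules by filtering them out of iptables-save output before restoring it. A sketch of the pattern, matching the expanded commands in this trace (the exact pipeline composition of iptr is inferred from the iptables-save / grep -v SPDK_NVMF / iptables-restore calls it traces):

  # allow NVMe/TCP traffic to port 4420 on the initiator interface, tagged for cleanup
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # teardown: drop only the tagged rules, leave the rest of the ruleset untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore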
00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.525 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:59.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:59.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:11:59.815 00:11:59.815 --- 10.0.0.3 ping statistics --- 00:11:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.815 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:59.815 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:59.815 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:11:59.815 00:11:59.815 --- 10.0.0.4 ping statistics --- 00:11:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.815 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:59.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:59.815 00:11:59.815 --- 10.0.0.1 ping statistics --- 00:11:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.815 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:59.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:59.815 00:11:59.815 --- 10.0.0.2 ping statistics --- 00:11:59.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.815 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63112 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63112 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63112 ']' 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.815 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:59.815 [2024-12-09 09:22:37.382339] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:11:59.815 [2024-12-09 09:22:37.382481] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.073 [2024-12-09 09:22:37.546446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.073 [2024-12-09 09:22:37.602554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.073 [2024-12-09 09:22:37.602606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.073 [2024-12-09 09:22:37.602616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.073 [2024-12-09 09:22:37.602626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.073 [2024-12-09 09:22:37.602633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.073 [2024-12-09 09:22:37.602943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.073 [2024-12-09 09:22:37.646518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:00.638 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.638 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:00.638 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.638 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.638 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.896 [2024-12-09 09:22:38.566140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:00.896 ************************************ 00:12:00.896 START TEST lvs_grow_clean 00:12:00.896 ************************************ 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:00.896 09:22:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:00.896 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:01.155 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:01.155 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:01.413 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:01.413 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:01.413 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:01.670 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:01.670 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:01.670 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 lvol 150 00:12:01.928 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6110febb-d42f-4765-8ab0-807c01ccf3ef 00:12:01.928 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:01.928 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:02.185 [2024-12-09 09:22:39.871878] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:02.185 [2024-12-09 09:22:39.871958] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:02.185 true 00:12:02.185 09:22:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:02.185 09:22:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:02.441 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:02.441 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:02.698 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6110febb-d42f-4765-8ab0-807c01ccf3ef 00:12:02.954 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:03.235 [2024-12-09 09:22:40.834792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:03.235 09:22:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63189 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63189 /var/tmp/bdevperf.sock 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63189 ']' 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.520 09:22:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:03.520 [2024-12-09 09:22:41.097865] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
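Condensed, the setup that lvs_grow_clean has driven up to this point is the following shell sketch. It is a summary of the xtrace above, not additional test steps: rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and $lvs/$lvol are stand-ins for the UUIDs reported above (46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 and 6110febb-d42f-4765-8ab0-807c01ccf3ef).
# Back an lvol store with a 200 MiB file exposed as an AIO bdev with 4 KiB blocks.
truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 clusters of 4 MiB each
# Carve a 150 MiB lvol, grow the backing file to 400 MiB, and let the AIO bdev pick up the new size.
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
rpc.py bdev_aio_rescan aio_bdev   # lvstore still reports 49 clusters until bdev_lvol_grow_lvstore runs
# Export the lvol over NVMe/TCP; the bdevperf process whose startup banner appears above attaches to it next.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420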
00:12:03.520 [2024-12-09 09:22:41.098140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63189 ] 00:12:03.778 [2024-12-09 09:22:41.244252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.778 [2024-12-09 09:22:41.288716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.778 [2024-12-09 09:22:41.332414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.343 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.343 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:04.343 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:04.602 Nvme0n1 00:12:04.602 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:04.861 [ 00:12:04.861 { 00:12:04.861 "name": "Nvme0n1", 00:12:04.861 "aliases": [ 00:12:04.861 "6110febb-d42f-4765-8ab0-807c01ccf3ef" 00:12:04.861 ], 00:12:04.861 "product_name": "NVMe disk", 00:12:04.861 "block_size": 4096, 00:12:04.861 "num_blocks": 38912, 00:12:04.861 "uuid": "6110febb-d42f-4765-8ab0-807c01ccf3ef", 00:12:04.861 "numa_id": -1, 00:12:04.861 "assigned_rate_limits": { 00:12:04.861 "rw_ios_per_sec": 0, 00:12:04.861 "rw_mbytes_per_sec": 0, 00:12:04.861 "r_mbytes_per_sec": 0, 00:12:04.861 "w_mbytes_per_sec": 0 00:12:04.861 }, 00:12:04.861 "claimed": false, 00:12:04.861 "zoned": false, 00:12:04.861 "supported_io_types": { 00:12:04.861 "read": true, 00:12:04.861 "write": true, 00:12:04.861 "unmap": true, 00:12:04.861 "flush": true, 00:12:04.861 "reset": true, 00:12:04.861 "nvme_admin": true, 00:12:04.861 "nvme_io": true, 00:12:04.861 "nvme_io_md": false, 00:12:04.861 "write_zeroes": true, 00:12:04.861 "zcopy": false, 00:12:04.861 "get_zone_info": false, 00:12:04.861 "zone_management": false, 00:12:04.861 "zone_append": false, 00:12:04.861 "compare": true, 00:12:04.861 "compare_and_write": true, 00:12:04.861 "abort": true, 00:12:04.861 "seek_hole": false, 00:12:04.861 "seek_data": false, 00:12:04.861 "copy": true, 00:12:04.861 "nvme_iov_md": false 00:12:04.861 }, 00:12:04.861 "memory_domains": [ 00:12:04.861 { 00:12:04.861 "dma_device_id": "system", 00:12:04.861 "dma_device_type": 1 00:12:04.861 } 00:12:04.861 ], 00:12:04.861 "driver_specific": { 00:12:04.861 "nvme": [ 00:12:04.861 { 00:12:04.861 "trid": { 00:12:04.861 "trtype": "TCP", 00:12:04.861 "adrfam": "IPv4", 00:12:04.861 "traddr": "10.0.0.3", 00:12:04.861 "trsvcid": "4420", 00:12:04.861 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:04.861 }, 00:12:04.861 "ctrlr_data": { 00:12:04.861 "cntlid": 1, 00:12:04.861 "vendor_id": "0x8086", 00:12:04.861 "model_number": "SPDK bdev Controller", 00:12:04.861 "serial_number": "SPDK0", 00:12:04.861 "firmware_revision": "25.01", 00:12:04.861 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:04.861 "oacs": { 00:12:04.861 "security": 0, 00:12:04.861 "format": 0, 00:12:04.861 "firmware": 0, 
00:12:04.861 "ns_manage": 0 00:12:04.861 }, 00:12:04.861 "multi_ctrlr": true, 00:12:04.861 "ana_reporting": false 00:12:04.861 }, 00:12:04.861 "vs": { 00:12:04.861 "nvme_version": "1.3" 00:12:04.861 }, 00:12:04.861 "ns_data": { 00:12:04.861 "id": 1, 00:12:04.861 "can_share": true 00:12:04.861 } 00:12:04.861 } 00:12:04.861 ], 00:12:04.861 "mp_policy": "active_passive" 00:12:04.861 } 00:12:04.861 } 00:12:04.861 ] 00:12:04.861 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63218 00:12:04.861 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:04.861 09:22:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:05.118 Running I/O for 10 seconds... 00:12:06.053 Latency(us) 00:12:06.053 [2024-12-09T09:22:43.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.053 Nvme0n1 : 1.00 9356.00 36.55 0.00 0.00 0.00 0.00 0.00 00:12:06.053 [2024-12-09T09:22:43.776Z] =================================================================================================================== 00:12:06.053 [2024-12-09T09:22:43.776Z] Total : 9356.00 36.55 0.00 0.00 0.00 0.00 0.00 00:12:06.053 00:12:06.991 09:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:06.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.991 Nvme0n1 : 2.00 9250.00 36.13 0.00 0.00 0.00 0.00 0.00 00:12:06.991 [2024-12-09T09:22:44.714Z] =================================================================================================================== 00:12:06.991 [2024-12-09T09:22:44.714Z] Total : 9250.00 36.13 0.00 0.00 0.00 0.00 0.00 00:12:06.991 00:12:07.249 true 00:12:07.249 09:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:07.249 09:22:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:07.507 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:07.507 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:07.507 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63218 00:12:08.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.075 Nvme0n1 : 3.00 8529.00 33.32 0.00 0.00 0.00 0.00 0.00 00:12:08.075 [2024-12-09T09:22:45.798Z] =================================================================================================================== 00:12:08.075 [2024-12-09T09:22:45.798Z] Total : 8529.00 33.32 0.00 0.00 0.00 0.00 0.00 00:12:08.075 00:12:09.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.012 Nvme0n1 : 4.00 8619.25 33.67 0.00 0.00 0.00 0.00 0.00 00:12:09.012 [2024-12-09T09:22:46.735Z] 
=================================================================================================================== 00:12:09.012 [2024-12-09T09:22:46.735Z] Total : 8619.25 33.67 0.00 0.00 0.00 0.00 0.00 00:12:09.012 00:12:09.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.987 Nvme0n1 : 5.00 8647.60 33.78 0.00 0.00 0.00 0.00 0.00 00:12:09.987 [2024-12-09T09:22:47.710Z] =================================================================================================================== 00:12:09.987 [2024-12-09T09:22:47.710Z] Total : 8647.60 33.78 0.00 0.00 0.00 0.00 0.00 00:12:09.987 00:12:10.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.923 Nvme0n1 : 6.00 8688.00 33.94 0.00 0.00 0.00 0.00 0.00 00:12:10.923 [2024-12-09T09:22:48.646Z] =================================================================================================================== 00:12:10.923 [2024-12-09T09:22:48.646Z] Total : 8688.00 33.94 0.00 0.00 0.00 0.00 0.00 00:12:10.923 00:12:12.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.297 Nvme0n1 : 7.00 8698.71 33.98 0.00 0.00 0.00 0.00 0.00 00:12:12.297 [2024-12-09T09:22:50.020Z] =================================================================================================================== 00:12:12.297 [2024-12-09T09:22:50.020Z] Total : 8698.71 33.98 0.00 0.00 0.00 0.00 0.00 00:12:12.297 00:12:13.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.229 Nvme0n1 : 8.00 8642.50 33.76 0.00 0.00 0.00 0.00 0.00 00:12:13.229 [2024-12-09T09:22:50.952Z] =================================================================================================================== 00:12:13.229 [2024-12-09T09:22:50.952Z] Total : 8642.50 33.76 0.00 0.00 0.00 0.00 0.00 00:12:13.229 00:12:14.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.168 Nvme0n1 : 9.00 8641.78 33.76 0.00 0.00 0.00 0.00 0.00 00:12:14.168 [2024-12-09T09:22:51.891Z] =================================================================================================================== 00:12:14.168 [2024-12-09T09:22:51.891Z] Total : 8641.78 33.76 0.00 0.00 0.00 0.00 0.00 00:12:14.168 00:12:15.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.109 Nvme0n1 : 10.00 8628.50 33.71 0.00 0.00 0.00 0.00 0.00 00:12:15.109 [2024-12-09T09:22:52.832Z] =================================================================================================================== 00:12:15.109 [2024-12-09T09:22:52.832Z] Total : 8628.50 33.71 0.00 0.00 0.00 0.00 0.00 00:12:15.109 00:12:15.109 00:12:15.109 Latency(us) 00:12:15.109 [2024-12-09T09:22:52.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.109 Nvme0n1 : 10.01 8634.78 33.73 0.00 0.00 14818.81 7369.51 249300.00 00:12:15.109 [2024-12-09T09:22:52.832Z] =================================================================================================================== 00:12:15.109 [2024-12-09T09:22:52.832Z] Total : 8634.78 33.73 0.00 0.00 14818.81 7369.51 249300.00 00:12:15.109 { 00:12:15.109 "results": [ 00:12:15.109 { 00:12:15.109 "job": "Nvme0n1", 00:12:15.109 "core_mask": "0x2", 00:12:15.109 "workload": "randwrite", 00:12:15.109 "status": "finished", 00:12:15.109 "queue_depth": 128, 00:12:15.109 "io_size": 4096, 00:12:15.109 "runtime": 
10.007546, 00:12:15.109 "iops": 8634.78419184883, 00:12:15.109 "mibps": 33.72962574940949, 00:12:15.109 "io_failed": 0, 00:12:15.109 "io_timeout": 0, 00:12:15.109 "avg_latency_us": 14818.805442231125, 00:12:15.109 "min_latency_us": 7369.510040160642, 00:12:15.109 "max_latency_us": 249299.9967871486 00:12:15.109 } 00:12:15.109 ], 00:12:15.109 "core_count": 1 00:12:15.109 } 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63189 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63189 ']' 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63189 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63189 00:12:15.109 killing process with pid 63189 00:12:15.109 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.109 00:12:15.109 Latency(us) 00:12:15.109 [2024-12-09T09:22:52.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.109 [2024-12-09T09:22:52.832Z] =================================================================================================================== 00:12:15.109 [2024-12-09T09:22:52.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63189' 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63189 00:12:15.109 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63189 00:12:15.369 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:15.633 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:15.905 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:15.905 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:16.165 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:16.165 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:16.165 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.425 [2024-12-09 09:22:54.057059] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:16.425 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:16.685 request: 00:12:16.685 { 00:12:16.685 "uuid": "46c54f44-8e3d-44dd-b3fb-021fd4bd8c92", 00:12:16.685 "method": "bdev_lvol_get_lvstores", 00:12:16.685 "req_id": 1 00:12:16.685 } 00:12:16.685 Got JSON-RPC error response 00:12:16.685 response: 00:12:16.685 { 00:12:16.685 "code": -19, 00:12:16.685 "message": "No such device" 00:12:16.685 } 00:12:16.685 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:16.685 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.685 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.685 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.685 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:16.945 aio_bdev 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
6110febb-d42f-4765-8ab0-807c01ccf3ef 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6110febb-d42f-4765-8ab0-807c01ccf3ef 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.945 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:17.205 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6110febb-d42f-4765-8ab0-807c01ccf3ef -t 2000 00:12:17.464 [ 00:12:17.464 { 00:12:17.464 "name": "6110febb-d42f-4765-8ab0-807c01ccf3ef", 00:12:17.464 "aliases": [ 00:12:17.464 "lvs/lvol" 00:12:17.464 ], 00:12:17.464 "product_name": "Logical Volume", 00:12:17.464 "block_size": 4096, 00:12:17.464 "num_blocks": 38912, 00:12:17.464 "uuid": "6110febb-d42f-4765-8ab0-807c01ccf3ef", 00:12:17.464 "assigned_rate_limits": { 00:12:17.464 "rw_ios_per_sec": 0, 00:12:17.464 "rw_mbytes_per_sec": 0, 00:12:17.464 "r_mbytes_per_sec": 0, 00:12:17.464 "w_mbytes_per_sec": 0 00:12:17.464 }, 00:12:17.464 "claimed": false, 00:12:17.464 "zoned": false, 00:12:17.464 "supported_io_types": { 00:12:17.464 "read": true, 00:12:17.464 "write": true, 00:12:17.464 "unmap": true, 00:12:17.464 "flush": false, 00:12:17.464 "reset": true, 00:12:17.464 "nvme_admin": false, 00:12:17.464 "nvme_io": false, 00:12:17.464 "nvme_io_md": false, 00:12:17.464 "write_zeroes": true, 00:12:17.464 "zcopy": false, 00:12:17.464 "get_zone_info": false, 00:12:17.464 "zone_management": false, 00:12:17.464 "zone_append": false, 00:12:17.464 "compare": false, 00:12:17.465 "compare_and_write": false, 00:12:17.465 "abort": false, 00:12:17.465 "seek_hole": true, 00:12:17.465 "seek_data": true, 00:12:17.465 "copy": false, 00:12:17.465 "nvme_iov_md": false 00:12:17.465 }, 00:12:17.465 "driver_specific": { 00:12:17.465 "lvol": { 00:12:17.465 "lvol_store_uuid": "46c54f44-8e3d-44dd-b3fb-021fd4bd8c92", 00:12:17.465 "base_bdev": "aio_bdev", 00:12:17.465 "thin_provision": false, 00:12:17.465 "num_allocated_clusters": 38, 00:12:17.465 "snapshot": false, 00:12:17.465 "clone": false, 00:12:17.465 "esnap_clone": false 00:12:17.465 } 00:12:17.465 } 00:12:17.465 } 00:12:17.465 ] 00:12:17.465 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:17.465 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:17.465 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:17.722 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:17.722 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:17.722 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:17.980 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:17.980 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6110febb-d42f-4765-8ab0-807c01ccf3ef 00:12:18.239 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 46c54f44-8e3d-44dd-b3fb-021fd4bd8c92 00:12:18.498 09:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:18.498 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:19.065 00:12:19.065 real 0m18.027s 00:12:19.065 user 0m15.986s 00:12:19.065 sys 0m3.293s 00:12:19.065 ************************************ 00:12:19.065 END TEST lvs_grow_clean 00:12:19.065 ************************************ 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:19.065 ************************************ 00:12:19.065 START TEST lvs_grow_dirty 00:12:19.065 ************************************ 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:19.065 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:19.324 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:19.324 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:19.582 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:19.582 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:19.582 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:19.841 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:19.841 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:19.841 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b lvol 150 00:12:20.099 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:20.099 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:20.099 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:20.357 [2024-12-09 09:22:57.894617] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:20.357 [2024-12-09 09:22:57.894975] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:20.357 true 00:12:20.357 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:20.357 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:20.722 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:20.722 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:20.722 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:20.987 09:22:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:21.246 [2024-12-09 09:22:58.770537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:21.246 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63459 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63459 /var/tmp/bdevperf.sock 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63459 ']' 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:21.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.504 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:21.504 [2024-12-09 09:22:59.042321] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
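The bdevperf side, whose startup banner appears here, follows the same pattern in the clean and dirty variants. A minimal sketch of what comes next, again abbreviating the rpc.py and bdevperf.py script paths and reusing $lvs for this run's lvstore UUID (db5a41e3-a163-4977-8e9b-668dc2ef9b8b); the harness backgrounds the perform_tests call and reaps it after the grow.
# Attach the exported namespace inside bdevperf and start its 10 s, 4 KiB, qd-128 randwrite job.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# While I/O is in flight, grow the lvstore into the enlarged 400 MiB file and re-read its geometry:
# total_data_clusters moves from 49 to 99.
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'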
00:12:21.504 [2024-12-09 09:22:59.042402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63459 ] 00:12:21.504 [2024-12-09 09:22:59.177902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.761 [2024-12-09 09:22:59.242815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.761 [2024-12-09 09:22:59.284625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:22.326 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.326 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:22.326 09:22:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:22.584 Nvme0n1 00:12:22.584 09:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:22.843 [ 00:12:22.843 { 00:12:22.843 "name": "Nvme0n1", 00:12:22.843 "aliases": [ 00:12:22.843 "f5893704-0ac1-4cbd-b2c5-464b456f74f0" 00:12:22.843 ], 00:12:22.843 "product_name": "NVMe disk", 00:12:22.843 "block_size": 4096, 00:12:22.843 "num_blocks": 38912, 00:12:22.843 "uuid": "f5893704-0ac1-4cbd-b2c5-464b456f74f0", 00:12:22.843 "numa_id": -1, 00:12:22.843 "assigned_rate_limits": { 00:12:22.843 "rw_ios_per_sec": 0, 00:12:22.843 "rw_mbytes_per_sec": 0, 00:12:22.843 "r_mbytes_per_sec": 0, 00:12:22.843 "w_mbytes_per_sec": 0 00:12:22.843 }, 00:12:22.843 "claimed": false, 00:12:22.843 "zoned": false, 00:12:22.843 "supported_io_types": { 00:12:22.843 "read": true, 00:12:22.843 "write": true, 00:12:22.843 "unmap": true, 00:12:22.843 "flush": true, 00:12:22.843 "reset": true, 00:12:22.843 "nvme_admin": true, 00:12:22.843 "nvme_io": true, 00:12:22.843 "nvme_io_md": false, 00:12:22.843 "write_zeroes": true, 00:12:22.843 "zcopy": false, 00:12:22.843 "get_zone_info": false, 00:12:22.843 "zone_management": false, 00:12:22.843 "zone_append": false, 00:12:22.843 "compare": true, 00:12:22.843 "compare_and_write": true, 00:12:22.843 "abort": true, 00:12:22.843 "seek_hole": false, 00:12:22.843 "seek_data": false, 00:12:22.843 "copy": true, 00:12:22.843 "nvme_iov_md": false 00:12:22.843 }, 00:12:22.843 "memory_domains": [ 00:12:22.843 { 00:12:22.843 "dma_device_id": "system", 00:12:22.843 "dma_device_type": 1 00:12:22.843 } 00:12:22.843 ], 00:12:22.843 "driver_specific": { 00:12:22.843 "nvme": [ 00:12:22.843 { 00:12:22.843 "trid": { 00:12:22.843 "trtype": "TCP", 00:12:22.843 "adrfam": "IPv4", 00:12:22.843 "traddr": "10.0.0.3", 00:12:22.843 "trsvcid": "4420", 00:12:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:22.843 }, 00:12:22.843 "ctrlr_data": { 00:12:22.843 "cntlid": 1, 00:12:22.843 "vendor_id": "0x8086", 00:12:22.843 "model_number": "SPDK bdev Controller", 00:12:22.843 "serial_number": "SPDK0", 00:12:22.843 "firmware_revision": "25.01", 00:12:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:22.843 "oacs": { 00:12:22.843 "security": 0, 00:12:22.843 "format": 0, 00:12:22.843 "firmware": 0, 
00:12:22.843 "ns_manage": 0 00:12:22.843 }, 00:12:22.843 "multi_ctrlr": true, 00:12:22.843 "ana_reporting": false 00:12:22.843 }, 00:12:22.843 "vs": { 00:12:22.843 "nvme_version": "1.3" 00:12:22.843 }, 00:12:22.843 "ns_data": { 00:12:22.843 "id": 1, 00:12:22.843 "can_share": true 00:12:22.843 } 00:12:22.843 } 00:12:22.843 ], 00:12:22.843 "mp_policy": "active_passive" 00:12:22.843 } 00:12:22.843 } 00:12:22.843 ] 00:12:22.843 09:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63477 00:12:22.843 09:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:22.843 09:23:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:22.843 Running I/O for 10 seconds... 00:12:24.221 Latency(us) 00:12:24.221 [2024-12-09T09:23:01.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.221 Nvme0n1 : 1.00 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 00:12:24.221 [2024-12-09T09:23:01.944Z] =================================================================================================================== 00:12:24.221 [2024-12-09T09:23:01.944Z] Total : 9652.00 37.70 0.00 0.00 0.00 0.00 0.00 00:12:24.221 00:12:24.790 09:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:25.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.049 Nvme0n1 : 2.00 9461.50 36.96 0.00 0.00 0.00 0.00 0.00 00:12:25.049 [2024-12-09T09:23:02.772Z] =================================================================================================================== 00:12:25.049 [2024-12-09T09:23:02.772Z] Total : 9461.50 36.96 0.00 0.00 0.00 0.00 0.00 00:12:25.049 00:12:25.049 true 00:12:25.049 09:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:25.049 09:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:25.309 09:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:25.309 09:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:25.309 09:23:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63477 00:12:25.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.878 Nvme0n1 : 3.00 9186.33 35.88 0.00 0.00 0.00 0.00 0.00 00:12:25.878 [2024-12-09T09:23:03.601Z] =================================================================================================================== 00:12:25.878 [2024-12-09T09:23:03.601Z] Total : 9186.33 35.88 0.00 0.00 0.00 0.00 0.00 00:12:25.878 00:12:26.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.834 Nvme0n1 : 4.00 9112.25 35.59 0.00 0.00 0.00 0.00 0.00 00:12:26.834 [2024-12-09T09:23:04.557Z] 
=================================================================================================================== 00:12:26.834 [2024-12-09T09:23:04.557Z] Total : 9112.25 35.59 0.00 0.00 0.00 0.00 0.00 00:12:26.834 00:12:28.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.212 Nvme0n1 : 5.00 9037.80 35.30 0.00 0.00 0.00 0.00 0.00 00:12:28.212 [2024-12-09T09:23:05.935Z] =================================================================================================================== 00:12:28.212 [2024-12-09T09:23:05.935Z] Total : 9037.80 35.30 0.00 0.00 0.00 0.00 0.00 00:12:28.212 00:12:29.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.147 Nvme0n1 : 6.00 8970.83 35.04 0.00 0.00 0.00 0.00 0.00 00:12:29.147 [2024-12-09T09:23:06.870Z] =================================================================================================================== 00:12:29.147 [2024-12-09T09:23:06.870Z] Total : 8970.83 35.04 0.00 0.00 0.00 0.00 0.00 00:12:29.147 00:12:30.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.084 Nvme0n1 : 7.00 8744.86 34.16 0.00 0.00 0.00 0.00 0.00 00:12:30.084 [2024-12-09T09:23:07.807Z] =================================================================================================================== 00:12:30.084 [2024-12-09T09:23:07.807Z] Total : 8744.86 34.16 0.00 0.00 0.00 0.00 0.00 00:12:30.084 00:12:31.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.020 Nvme0n1 : 8.00 8715.38 34.04 0.00 0.00 0.00 0.00 0.00 00:12:31.020 [2024-12-09T09:23:08.743Z] =================================================================================================================== 00:12:31.020 [2024-12-09T09:23:08.743Z] Total : 8715.38 34.04 0.00 0.00 0.00 0.00 0.00 00:12:31.020 00:12:32.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.000 Nvme0n1 : 9.00 8702.22 33.99 0.00 0.00 0.00 0.00 0.00 00:12:32.000 [2024-12-09T09:23:09.723Z] =================================================================================================================== 00:12:32.000 [2024-12-09T09:23:09.723Z] Total : 8702.22 33.99 0.00 0.00 0.00 0.00 0.00 00:12:32.000 00:12:32.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.937 Nvme0n1 : 10.00 8705.70 34.01 0.00 0.00 0.00 0.00 0.00 00:12:32.937 [2024-12-09T09:23:10.660Z] =================================================================================================================== 00:12:32.937 [2024-12-09T09:23:10.660Z] Total : 8705.70 34.01 0.00 0.00 0.00 0.00 0.00 00:12:32.937 00:12:32.937 00:12:32.937 Latency(us) 00:12:32.937 [2024-12-09T09:23:10.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.937 Nvme0n1 : 10.01 8707.91 34.02 0.00 0.00 14695.71 2974.12 170972.63 00:12:32.937 [2024-12-09T09:23:10.660Z] =================================================================================================================== 00:12:32.937 [2024-12-09T09:23:10.660Z] Total : 8707.91 34.02 0.00 0.00 14695.71 2974.12 170972.63 00:12:32.937 { 00:12:32.937 "results": [ 00:12:32.937 { 00:12:32.937 "job": "Nvme0n1", 00:12:32.937 "core_mask": "0x2", 00:12:32.937 "workload": "randwrite", 00:12:32.937 "status": "finished", 00:12:32.937 "queue_depth": 128, 00:12:32.937 "io_size": 4096, 00:12:32.937 "runtime": 
10.012167, 00:12:32.937 "iops": 8707.905091874716, 00:12:32.937 "mibps": 34.01525426513561, 00:12:32.937 "io_failed": 0, 00:12:32.937 "io_timeout": 0, 00:12:32.937 "avg_latency_us": 14695.711854269173, 00:12:32.937 "min_latency_us": 2974.1236947791162, 00:12:32.937 "max_latency_us": 170972.6329317269 00:12:32.937 } 00:12:32.937 ], 00:12:32.937 "core_count": 1 00:12:32.937 } 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63459 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63459 ']' 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63459 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63459 00:12:32.937 killing process with pid 63459 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63459' 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63459 00:12:32.937 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.937 00:12:32.937 Latency(us) 00:12:32.937 [2024-12-09T09:23:10.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.937 [2024-12-09T09:23:10.660Z] =================================================================================================================== 00:12:32.937 [2024-12-09T09:23:10.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:32.937 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63459 00:12:33.196 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:33.456 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:33.714 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:33.714 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63112 
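This kill -9 is the point of the dirty variant: the target that owns the freshly grown lvstore is terminated without unloading the blobstore, so the on-disk superblock is never marked clean and the next loader has to replay the metadata (the "Performing recovery on blobstore" notices further down). In outline, with the pid from this run:
# Terminate nvmf_tgt (pid 63112 here) without a clean unload, leaving the lvstore dirty on disk.
kill -9 63112
wait 63112   # returns non-zero because the process was SIGKILLed; bash prints the Killed notice seen next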
00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63112 00:12:33.973 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63112 Killed "${NVMF_APP[@]}" "$@" 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63614 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63614 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63614 ']' 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.973 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:33.973 [2024-12-09 09:23:11.557629] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:12:33.973 [2024-12-09 09:23:11.557707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.233 [2024-12-09 09:23:11.711607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.233 [2024-12-09 09:23:11.760882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.233 [2024-12-09 09:23:11.760936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.233 [2024-12-09 09:23:11.760946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.233 [2024-12-09 09:23:11.760954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.233 [2024-12-09 09:23:11.760961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
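From here the log shows the recovery half of the dirty test: a replacement nvmf_tgt (pid 63614, started with -m 0x1 as above) re-registers the same backing file, the lvol module's examine path replays the dirty blobstore, and the test checks that the grown geometry survived before tearing everything down. Condensed, and reusing the UUIDs from this run:
# Re-create the AIO bdev on the same file; examine triggers blobstore recovery (NOTICE lines below).
rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
rpc.py bdev_wait_for_examine
# The growth done before the crash must still be visible: 99 data clusters total, 61 of them free.
rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b | jq -r '.[0].total_data_clusters, .[0].free_clusters'
# Teardown: delete the lvol, the lvstore, and the AIO bdev, then remove the backing file.
rpc.py bdev_lvol_delete f5893704-0ac1-4cbd-b2c5-464b456f74f0
rpc.py bdev_lvol_delete_lvstore -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b
rpc.py bdev_aio_delete aio_bdev
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev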
00:12:34.233 [2024-12-09 09:23:11.761233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.233 [2024-12-09 09:23:11.802499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.801 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:35.060 [2024-12-09 09:23:12.694262] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:35.060 [2024-12-09 09:23:12.694757] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:35.060 [2024-12-09 09:23:12.695009] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.060 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:35.320 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5893704-0ac1-4cbd-b2c5-464b456f74f0 -t 2000 00:12:35.579 [ 00:12:35.579 { 00:12:35.579 "name": "f5893704-0ac1-4cbd-b2c5-464b456f74f0", 00:12:35.579 "aliases": [ 00:12:35.579 "lvs/lvol" 00:12:35.579 ], 00:12:35.579 "product_name": "Logical Volume", 00:12:35.579 "block_size": 4096, 00:12:35.579 "num_blocks": 38912, 00:12:35.579 "uuid": "f5893704-0ac1-4cbd-b2c5-464b456f74f0", 00:12:35.579 "assigned_rate_limits": { 00:12:35.579 "rw_ios_per_sec": 0, 00:12:35.579 "rw_mbytes_per_sec": 0, 00:12:35.579 "r_mbytes_per_sec": 0, 00:12:35.579 "w_mbytes_per_sec": 0 00:12:35.579 }, 00:12:35.579 
"claimed": false, 00:12:35.579 "zoned": false, 00:12:35.579 "supported_io_types": { 00:12:35.579 "read": true, 00:12:35.579 "write": true, 00:12:35.579 "unmap": true, 00:12:35.579 "flush": false, 00:12:35.579 "reset": true, 00:12:35.579 "nvme_admin": false, 00:12:35.579 "nvme_io": false, 00:12:35.579 "nvme_io_md": false, 00:12:35.579 "write_zeroes": true, 00:12:35.579 "zcopy": false, 00:12:35.579 "get_zone_info": false, 00:12:35.579 "zone_management": false, 00:12:35.579 "zone_append": false, 00:12:35.579 "compare": false, 00:12:35.579 "compare_and_write": false, 00:12:35.579 "abort": false, 00:12:35.579 "seek_hole": true, 00:12:35.579 "seek_data": true, 00:12:35.579 "copy": false, 00:12:35.579 "nvme_iov_md": false 00:12:35.579 }, 00:12:35.579 "driver_specific": { 00:12:35.579 "lvol": { 00:12:35.579 "lvol_store_uuid": "db5a41e3-a163-4977-8e9b-668dc2ef9b8b", 00:12:35.579 "base_bdev": "aio_bdev", 00:12:35.579 "thin_provision": false, 00:12:35.579 "num_allocated_clusters": 38, 00:12:35.579 "snapshot": false, 00:12:35.579 "clone": false, 00:12:35.579 "esnap_clone": false 00:12:35.579 } 00:12:35.579 } 00:12:35.579 } 00:12:35.579 ] 00:12:35.579 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:35.579 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:35.579 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:35.839 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:35.839 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:35.839 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:36.098 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:36.098 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:36.098 [2024-12-09 09:23:13.818181] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.356 09:23:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:36.356 09:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:36.356 request: 00:12:36.356 { 00:12:36.356 "uuid": "db5a41e3-a163-4977-8e9b-668dc2ef9b8b", 00:12:36.356 "method": "bdev_lvol_get_lvstores", 00:12:36.356 "req_id": 1 00:12:36.356 } 00:12:36.356 Got JSON-RPC error response 00:12:36.356 response: 00:12:36.356 { 00:12:36.356 "code": -19, 00:12:36.356 "message": "No such device" 00:12:36.356 } 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:36.615 aio_bdev 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.615 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:36.928 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5893704-0ac1-4cbd-b2c5-464b456f74f0 -t 2000 00:12:37.195 [ 00:12:37.195 { 
00:12:37.195 "name": "f5893704-0ac1-4cbd-b2c5-464b456f74f0", 00:12:37.195 "aliases": [ 00:12:37.195 "lvs/lvol" 00:12:37.195 ], 00:12:37.195 "product_name": "Logical Volume", 00:12:37.195 "block_size": 4096, 00:12:37.195 "num_blocks": 38912, 00:12:37.195 "uuid": "f5893704-0ac1-4cbd-b2c5-464b456f74f0", 00:12:37.195 "assigned_rate_limits": { 00:12:37.195 "rw_ios_per_sec": 0, 00:12:37.195 "rw_mbytes_per_sec": 0, 00:12:37.195 "r_mbytes_per_sec": 0, 00:12:37.195 "w_mbytes_per_sec": 0 00:12:37.195 }, 00:12:37.195 "claimed": false, 00:12:37.195 "zoned": false, 00:12:37.195 "supported_io_types": { 00:12:37.195 "read": true, 00:12:37.195 "write": true, 00:12:37.195 "unmap": true, 00:12:37.195 "flush": false, 00:12:37.195 "reset": true, 00:12:37.195 "nvme_admin": false, 00:12:37.195 "nvme_io": false, 00:12:37.195 "nvme_io_md": false, 00:12:37.195 "write_zeroes": true, 00:12:37.195 "zcopy": false, 00:12:37.195 "get_zone_info": false, 00:12:37.195 "zone_management": false, 00:12:37.195 "zone_append": false, 00:12:37.195 "compare": false, 00:12:37.195 "compare_and_write": false, 00:12:37.195 "abort": false, 00:12:37.195 "seek_hole": true, 00:12:37.195 "seek_data": true, 00:12:37.195 "copy": false, 00:12:37.195 "nvme_iov_md": false 00:12:37.195 }, 00:12:37.195 "driver_specific": { 00:12:37.195 "lvol": { 00:12:37.195 "lvol_store_uuid": "db5a41e3-a163-4977-8e9b-668dc2ef9b8b", 00:12:37.195 "base_bdev": "aio_bdev", 00:12:37.195 "thin_provision": false, 00:12:37.195 "num_allocated_clusters": 38, 00:12:37.195 "snapshot": false, 00:12:37.195 "clone": false, 00:12:37.195 "esnap_clone": false 00:12:37.195 } 00:12:37.195 } 00:12:37.195 } 00:12:37.195 ] 00:12:37.195 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:37.195 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:37.196 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:37.455 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:37.455 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:37.455 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:37.713 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:37.713 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f5893704-0ac1-4cbd-b2c5-464b456f74f0 00:12:37.713 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b 00:12:37.972 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:38.231 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:38.799 ************************************ 00:12:38.799 END TEST lvs_grow_dirty 00:12:38.799 ************************************ 00:12:38.799 00:12:38.799 real 0m19.570s 00:12:38.799 user 0m39.715s 00:12:38.799 sys 0m8.003s 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:38.799 nvmf_trace.0 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.799 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.366 rmmod nvme_tcp 00:12:39.366 rmmod nvme_fabrics 00:12:39.366 rmmod nvme_keyring 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63614 ']' 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63614 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63614 ']' 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63614 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:39.366 09:23:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63614 00:12:39.366 killing process with pid 63614 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63614' 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63614 00:12:39.366 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63614 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.624 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:12:39.882 00:12:39.882 real 0m41.080s 00:12:39.882 user 1m2.168s 00:12:39.882 sys 0m12.635s 00:12:39.882 ************************************ 00:12:39.882 END TEST nvmf_lvs_grow 00:12:39.882 ************************************ 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.882 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:40.140 09:23:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:40.140 09:23:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:40.140 09:23:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.140 09:23:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:40.140 ************************************ 00:12:40.140 START TEST nvmf_bdev_io_wait 00:12:40.141 ************************************ 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:40.141 * Looking for test storage... 
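Before the log moves on to bdev_io_wait, it is worth spelling out what the lvs_grow_dirty trace above actually exercised: an lvstore left dirty on an aio bdev is recovered simply by re-creating the bdev on the same backing file. Condensed to plain rpc.py calls, using the UUIDs from this run, this is a sketch of target/nvmf_lvs_grow.sh@77-95 rather than a verbatim replay:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                        # blobstore replays metadata: "Performing recovery on blobstore"
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b f5893704-0ac1-4cbd-b2c5-464b456f74f0 -t 2000   # the lvol bdev is registered again
    $rpc bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b | jq -r '.[0].free_clusters'        # expected 61
    $rpc bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b | jq -r '.[0].total_data_clusters'  # expected 99
    $rpc bdev_aio_delete aio_bdev                                         # hot-remove: the lvstore closes with its base bdev
    ! $rpc bdev_lvol_get_lvstores -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b # must now fail with -19 "No such device"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                        # recover a second time, then re-check the same cluster counts
    $rpc bdev_lvol_delete f5893704-0ac1-4cbd-b2c5-464b456f74f0
    $rpc bdev_lvol_delete_lvstore -u db5a41e3-a163-4977-8e9b-668dc2ef9b8b
    $rpc bdev_aio_delete aio_bdev && rm -f "$aio_file"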
00:12:40.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:40.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.141 --rc genhtml_branch_coverage=1 00:12:40.141 --rc genhtml_function_coverage=1 00:12:40.141 --rc genhtml_legend=1 00:12:40.141 --rc geninfo_all_blocks=1 00:12:40.141 --rc geninfo_unexecuted_blocks=1 00:12:40.141 00:12:40.141 ' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:40.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.141 --rc genhtml_branch_coverage=1 00:12:40.141 --rc genhtml_function_coverage=1 00:12:40.141 --rc genhtml_legend=1 00:12:40.141 --rc geninfo_all_blocks=1 00:12:40.141 --rc geninfo_unexecuted_blocks=1 00:12:40.141 00:12:40.141 ' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:40.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.141 --rc genhtml_branch_coverage=1 00:12:40.141 --rc genhtml_function_coverage=1 00:12:40.141 --rc genhtml_legend=1 00:12:40.141 --rc geninfo_all_blocks=1 00:12:40.141 --rc geninfo_unexecuted_blocks=1 00:12:40.141 00:12:40.141 ' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:40.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.141 --rc genhtml_branch_coverage=1 00:12:40.141 --rc genhtml_function_coverage=1 00:12:40.141 --rc genhtml_legend=1 00:12:40.141 --rc geninfo_all_blocks=1 00:12:40.141 --rc geninfo_unexecuted_blocks=1 00:12:40.141 00:12:40.141 ' 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:40.141 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
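The lcov gate traced a little earlier (common/autotest_common.sh@1711 piping lcov --version into awk, then scripts/common.sh's lt/cmp_versions) picks the coverage flag set by comparing dot-separated version fields left to right. A minimal sketch of that comparison, assuming only the less-than case matters; the real cmp_versions also handles '>', '=' and the ':' separator:

    version_lt() {
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use the old option set"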
00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:40.401 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:40.402 
09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:40.402 Cannot find device "nvmf_init_br" 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:40.402 Cannot find device "nvmf_init_br2" 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:40.402 Cannot find device "nvmf_tgt_br" 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:40.402 Cannot find device "nvmf_tgt_br2" 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:12:40.402 09:23:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:40.402 Cannot find device "nvmf_init_br" 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:40.402 Cannot find device "nvmf_init_br2" 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:40.402 Cannot find device "nvmf_tgt_br" 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:40.402 Cannot find device "nvmf_tgt_br2" 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:40.402 Cannot find device "nvmf_br" 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:40.402 Cannot find device "nvmf_init_if" 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:12:40.402 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:40.661 Cannot find device "nvmf_init_if2" 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:40.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:12:40.661 
09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:40.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:40.661 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:40.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:12:40.920 00:12:40.920 --- 10.0.0.3 ping statistics --- 00:12:40.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.920 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:40.920 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:40.920 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:12:40.920 00:12:40.920 --- 10.0.0.4 ping statistics --- 00:12:40.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.920 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:40.920 00:12:40.920 --- 10.0.0.1 ping statistics --- 00:12:40.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.920 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:40.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:12:40.920 00:12:40.920 --- 10.0.0.2 ping statistics --- 00:12:40.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.920 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63988 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63988 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63988 ']' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.920 09:23:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:40.920 [2024-12-09 09:23:18.523331] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
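For reference, the nvmf_veth_init bring-up traced above (nvmf/common.sh@145-225) builds a small test network: one veth pair per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and the root-namespace peers glued together by a bridge. Condensed to the first initiator/target pair (the 10.0.0.2/10.0.0.4 pair and the pre-cleanup follow the same pattern), a rough sketch:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target end is pushed into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                         # the bridge joins the two root-namespace peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                              # sanity check: the root namespace reaches the target address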
00:12:40.920 [2024-12-09 09:23:18.523398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.178 [2024-12-09 09:23:18.674948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.178 [2024-12-09 09:23:18.752470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.178 [2024-12-09 09:23:18.752721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.178 [2024-12-09 09:23:18.752899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.178 [2024-12-09 09:23:18.752949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.178 [2024-12-09 09:23:18.752975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.178 [2024-12-09 09:23:18.754509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.178 [2024-12-09 09:23:18.754659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.178 [2024-12-09 09:23:18.754757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.178 [2024-12-09 09:23:18.754756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.745 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.004 [2024-12-09 09:23:19.493228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.004 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.004 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:42.004 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.005 [2024-12-09 09:23:19.508638] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.005 Malloc0 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.005 [2024-12-09 09:23:19.571117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64023 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:42.005 { 00:12:42.005 
"params": { 00:12:42.005 "name": "Nvme$subsystem", 00:12:42.005 "trtype": "$TEST_TRANSPORT", 00:12:42.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "$NVMF_PORT", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.005 "hdgst": ${hdgst:-false}, 00:12:42.005 "ddgst": ${ddgst:-false} 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 } 00:12:42.005 EOF 00:12:42.005 )") 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64025 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64029 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:42.005 { 00:12:42.005 "params": { 00:12:42.005 "name": "Nvme$subsystem", 00:12:42.005 "trtype": "$TEST_TRANSPORT", 00:12:42.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "$NVMF_PORT", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.005 "hdgst": ${hdgst:-false}, 00:12:42.005 "ddgst": ${ddgst:-false} 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 } 00:12:42.005 EOF 00:12:42.005 )") 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:42.005 { 00:12:42.005 "params": { 00:12:42.005 "name": "Nvme$subsystem", 00:12:42.005 "trtype": "$TEST_TRANSPORT", 00:12:42.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "$NVMF_PORT", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.005 "hdgst": ${hdgst:-false}, 00:12:42.005 "ddgst": ${ddgst:-false} 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 } 00:12:42.005 EOF 
00:12:42.005 )") 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64030 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:42.005 "params": { 00:12:42.005 "name": "Nvme1", 00:12:42.005 "trtype": "tcp", 00:12:42.005 "traddr": "10.0.0.3", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "4420", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.005 "hdgst": false, 00:12:42.005 "ddgst": false 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 }' 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:42.005 { 00:12:42.005 "params": { 00:12:42.005 "name": "Nvme$subsystem", 00:12:42.005 "trtype": "$TEST_TRANSPORT", 00:12:42.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "$NVMF_PORT", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.005 "hdgst": ${hdgst:-false}, 00:12:42.005 "ddgst": ${ddgst:-false} 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 } 00:12:42.005 EOF 00:12:42.005 )") 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:42.005 "params": { 00:12:42.005 "name": "Nvme1", 00:12:42.005 "trtype": "tcp", 00:12:42.005 "traddr": "10.0.0.3", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "4420", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.005 "hdgst": false, 00:12:42.005 "ddgst": false 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 }' 00:12:42.005 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:42.005 "params": { 00:12:42.005 "name": "Nvme1", 00:12:42.005 "trtype": "tcp", 00:12:42.005 "traddr": "10.0.0.3", 00:12:42.005 "adrfam": "ipv4", 00:12:42.005 "trsvcid": "4420", 00:12:42.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.005 "hdgst": false, 00:12:42.005 "ddgst": false 00:12:42.005 }, 00:12:42.005 "method": "bdev_nvme_attach_controller" 00:12:42.005 }' 00:12:42.006 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:42.006 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:42.006 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:42.006 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:42.006 "params": { 00:12:42.006 "name": "Nvme1", 00:12:42.006 "trtype": "tcp", 00:12:42.006 "traddr": "10.0.0.3", 00:12:42.006 "adrfam": "ipv4", 00:12:42.006 "trsvcid": "4420", 00:12:42.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:42.006 "hdgst": false, 00:12:42.006 "ddgst": false 00:12:42.006 }, 00:12:42.006 "method": "bdev_nvme_attach_controller" 00:12:42.006 }' 00:12:42.006 [2024-12-09 09:23:19.630190] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:12:42.006 [2024-12-09 09:23:19.631335] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 09:23:19.631592] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:12:42.006 [2024-12-09 09:23:19.631654] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:42.006 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:42.006 [2024-12-09 09:23:19.643729] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:12:42.006 [2024-12-09 09:23:19.643893] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:42.006 [2024-12-09 09:23:19.648572] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:12:42.006 [2024-12-09 09:23:19.648782] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:42.006 09:23:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64023 00:12:42.265 [2024-12-09 09:23:19.837719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.265 [2024-12-09 09:23:19.879702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:42.265 [2024-12-09 09:23:19.891677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.265 [2024-12-09 09:23:19.900061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.265 [2024-12-09 09:23:19.944218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:42.265 [2024-12-09 09:23:19.956079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.265 [2024-12-09 09:23:19.975297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.554 Running I/O for 1 seconds... 00:12:42.554 [2024-12-09 09:23:20.027222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.554 [2024-12-09 09:23:20.035318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:42.554 [2024-12-09 09:23:20.047318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.554 Running I/O for 1 seconds... 00:12:42.554 [2024-12-09 09:23:20.072291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:42.554 [2024-12-09 09:23:20.084123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.554 Running I/O for 1 seconds... 00:12:42.554 Running I/O for 1 seconds... 
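At this point bdev_io_wait has four bdevperf instances in flight against the same Nvme1n1 bdev — the read, flush and unmap jobs launched above plus the earlier write job — each receiving its controller config through process substitution (--json /dev/fd/63) from gen_nvmf_target_json. For reference, a standalone equivalent of the read job with a hand-written config file is sketched below; the outer "subsystems"/"config" wrapper is assumed from SPDK's usual JSON config layout rather than shown in this log.

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same knobs as the READ_PID job above: core mask 0x20, queue depth 128, 4 KiB reads for 1 s
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 \
    --json /tmp/nvme1.json -q 128 -o 4096 -w read -t 1 -s 256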
00:12:43.491 211696.00 IOPS, 826.94 MiB/s 00:12:43.491 Latency(us) 00:12:43.491 [2024-12-09T09:23:21.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.491 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:43.491 Nvme1n1 : 1.00 211309.46 825.43 0.00 0.00 603.05 307.61 1842.38 00:12:43.491 [2024-12-09T09:23:21.214Z] =================================================================================================================== 00:12:43.491 [2024-12-09T09:23:21.214Z] Total : 211309.46 825.43 0.00 0.00 603.05 307.61 1842.38 00:12:43.491 7421.00 IOPS, 28.99 MiB/s 00:12:43.491 Latency(us) 00:12:43.491 [2024-12-09T09:23:21.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.491 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:43.491 Nvme1n1 : 1.02 7368.07 28.78 0.00 0.00 17150.67 7264.23 30530.83 00:12:43.491 [2024-12-09T09:23:21.214Z] =================================================================================================================== 00:12:43.491 [2024-12-09T09:23:21.214Z] Total : 7368.07 28.78 0.00 0.00 17150.67 7264.23 30530.83 00:12:43.491 6979.00 IOPS, 27.26 MiB/s 00:12:43.491 Latency(us) 00:12:43.491 [2024-12-09T09:23:21.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.491 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:43.491 Nvme1n1 : 1.01 7084.96 27.68 0.00 0.00 18008.85 5816.65 37058.11 00:12:43.491 [2024-12-09T09:23:21.214Z] =================================================================================================================== 00:12:43.491 [2024-12-09T09:23:21.214Z] Total : 7084.96 27.68 0.00 0.00 18008.85 5816.65 37058.11 00:12:43.491 10680.00 IOPS, 41.72 MiB/s 00:12:43.491 Latency(us) 00:12:43.491 [2024-12-09T09:23:21.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.491 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:43.491 Nvme1n1 : 1.01 10764.06 42.05 0.00 0.00 11854.76 4869.14 23161.32 00:12:43.491 [2024-12-09T09:23:21.214Z] =================================================================================================================== 00:12:43.491 [2024-12-09T09:23:21.214Z] Total : 10764.06 42.05 0.00 0.00 11854.76 4869.14 23161.32 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64025 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64029 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64030 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.750 rmmod nvme_tcp 00:12:43.750 rmmod nvme_fabrics 00:12:43.750 rmmod nvme_keyring 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63988 ']' 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63988 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63988 ']' 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63988 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.750 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63988 00:12:44.009 killing process with pid 63988 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63988' 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63988 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63988 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:44.009 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:12:44.268 00:12:44.268 real 0m4.347s 00:12:44.268 user 0m16.044s 00:12:44.268 sys 0m2.547s 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.268 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:44.268 ************************************ 00:12:44.268 END TEST nvmf_bdev_io_wait 00:12:44.268 ************************************ 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:44.527 ************************************ 00:12:44.527 START TEST nvmf_queue_depth 00:12:44.527 ************************************ 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:44.527 * Looking for test storage... 
00:12:44.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:12:44.527 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.786 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:44.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.787 --rc genhtml_branch_coverage=1 00:12:44.787 --rc genhtml_function_coverage=1 00:12:44.787 --rc genhtml_legend=1 00:12:44.787 --rc geninfo_all_blocks=1 00:12:44.787 --rc geninfo_unexecuted_blocks=1 00:12:44.787 00:12:44.787 ' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:44.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.787 --rc genhtml_branch_coverage=1 00:12:44.787 --rc genhtml_function_coverage=1 00:12:44.787 --rc genhtml_legend=1 00:12:44.787 --rc geninfo_all_blocks=1 00:12:44.787 --rc geninfo_unexecuted_blocks=1 00:12:44.787 00:12:44.787 ' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:44.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.787 --rc genhtml_branch_coverage=1 00:12:44.787 --rc genhtml_function_coverage=1 00:12:44.787 --rc genhtml_legend=1 00:12:44.787 --rc geninfo_all_blocks=1 00:12:44.787 --rc geninfo_unexecuted_blocks=1 00:12:44.787 00:12:44.787 ' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:44.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.787 --rc genhtml_branch_coverage=1 00:12:44.787 --rc genhtml_function_coverage=1 00:12:44.787 --rc genhtml_legend=1 00:12:44.787 --rc geninfo_all_blocks=1 00:12:44.787 --rc geninfo_unexecuted_blocks=1 00:12:44.787 00:12:44.787 ' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.787 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:44.787 
09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.787 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:44.788 09:23:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:44.788 Cannot find device "nvmf_init_br" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:44.788 Cannot find device "nvmf_init_br2" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:44.788 Cannot find device "nvmf_tgt_br" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:44.788 Cannot find device "nvmf_tgt_br2" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:44.788 Cannot find device "nvmf_init_br" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:44.788 Cannot find device "nvmf_init_br2" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:44.788 Cannot find device "nvmf_tgt_br" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:44.788 Cannot find device "nvmf_tgt_br2" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:44.788 Cannot find device "nvmf_br" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:44.788 Cannot find device "nvmf_init_if" 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:12:44.788 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:45.046 Cannot find device "nvmf_init_if2" 00:12:45.046 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:12:45.046 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.046 09:23:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:45.047 
09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:45.047 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:45.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:45.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:12:45.305 00:12:45.305 --- 10.0.0.3 ping statistics --- 00:12:45.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.305 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:45.305 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:45.305 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:12:45.305 00:12:45.305 --- 10.0.0.4 ping statistics --- 00:12:45.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.305 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:45.305 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:45.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:45.305 00:12:45.305 --- 10.0.0.1 ping statistics --- 00:12:45.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.305 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:45.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:45.306 00:12:45.306 --- 10.0.0.2 ping statistics --- 00:12:45.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.306 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64288 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64288 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64288 ']' 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.306 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.306 [2024-12-09 09:23:22.936519] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
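By this point nvmf_veth_init has built the test topology: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace as the initiator side, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into nvmf_tgt_ns_spdk as the target side, all bridge ends are enslaved to nvmf_br, port 4420 is allowed through iptables, connectivity is ping-verified, and nvmf_tgt is started inside the namespace on core mask 0x2. If the same layout needs to be inspected by hand, a couple of standard iproute2 commands (not part of the script; shown here only for illustration) are enough:

# initiator-side and namespaced target-side addresses
ip -br addr show | grep nvmf
ip netns exec nvmf_tgt_ns_spdk ip -br addr show
# which veth ends are attached to the test bridge
ip link show master nvmf_br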
00:12:45.306 [2024-12-09 09:23:22.936589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.565 [2024-12-09 09:23:23.091773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.565 [2024-12-09 09:23:23.141628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.565 [2024-12-09 09:23:23.141677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.565 [2024-12-09 09:23:23.141687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.565 [2024-12-09 09:23:23.141695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.565 [2024-12-09 09:23:23.141702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.565 [2024-12-09 09:23:23.141984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.565 [2024-12-09 09:23:23.182635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:45.565 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.565 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:45.565 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.565 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.565 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.824 [2024-12-09 09:23:23.307613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.824 Malloc0 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.824 [2024-12-09 09:23:23.354087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64312 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64312 /var/tmp/bdevperf.sock 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64312 ']' 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.824 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:45.824 [2024-12-09 09:23:23.411828] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
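The rpc_cmd calls above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) effectively forward their arguments to scripts/rpc.py against the target's default /var/tmp/spdk.sock. Written out directly — assuming the standard rpc.py location in this repo checkout — the queue_depth setup and the initiator-side steps that follow in the log look roughly like:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side: TCP transport, a 64 MiB / 512-byte-block malloc bdev,
# and a subsystem exporting it on 10.0.0.3:4420
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# initiator side, against the bdevperf instance started with -z -r /var/tmp/bdevperf.sock:
# attach the remote controller, then run the 1024-deep 4 KiB verify workload for 10 s
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests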
00:12:45.824 [2024-12-09 09:23:23.411891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64312 ] 00:12:46.082 [2024-12-09 09:23:23.561076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.082 [2024-12-09 09:23:23.603858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.082 [2024-12-09 09:23:23.644971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.648 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.648 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:12:46.648 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:46.648 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.648 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.906 NVMe0n1 00:12:46.906 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.906 09:23:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:46.906 Running I/O for 10 seconds... 00:12:48.809 9172.00 IOPS, 35.83 MiB/s [2024-12-09T09:23:27.907Z] 9216.00 IOPS, 36.00 MiB/s [2024-12-09T09:23:28.841Z] 9230.67 IOPS, 36.06 MiB/s [2024-12-09T09:23:29.777Z] 9278.00 IOPS, 36.24 MiB/s [2024-12-09T09:23:30.733Z] 9304.20 IOPS, 36.34 MiB/s [2024-12-09T09:23:31.667Z] 9398.00 IOPS, 36.71 MiB/s [2024-12-09T09:23:32.601Z] 9500.14 IOPS, 37.11 MiB/s [2024-12-09T09:23:33.534Z] 9609.00 IOPS, 37.54 MiB/s [2024-12-09T09:23:34.909Z] 9684.33 IOPS, 37.83 MiB/s [2024-12-09T09:23:34.909Z] 9737.40 IOPS, 38.04 MiB/s 00:12:57.186 Latency(us) 00:12:57.186 [2024-12-09T09:23:34.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:57.186 Verification LBA range: start 0x0 length 0x4000 00:12:57.186 NVMe0n1 : 10.08 9754.64 38.10 0.00 0.00 104569.49 19160.73 70326.18 00:12:57.186 [2024-12-09T09:23:34.909Z] =================================================================================================================== 00:12:57.186 [2024-12-09T09:23:34.909Z] Total : 9754.64 38.10 0.00 0.00 104569.49 19160.73 70326.18 00:12:57.186 { 00:12:57.186 "results": [ 00:12:57.186 { 00:12:57.186 "job": "NVMe0n1", 00:12:57.186 "core_mask": "0x1", 00:12:57.186 "workload": "verify", 00:12:57.186 "status": "finished", 00:12:57.186 "verify_range": { 00:12:57.186 "start": 0, 00:12:57.186 "length": 16384 00:12:57.186 }, 00:12:57.186 "queue_depth": 1024, 00:12:57.186 "io_size": 4096, 00:12:57.186 "runtime": 10.08228, 00:12:57.186 "iops": 9754.638831692831, 00:12:57.186 "mibps": 38.10405793630012, 00:12:57.186 "io_failed": 0, 00:12:57.186 "io_timeout": 0, 00:12:57.186 "avg_latency_us": 104569.49306293491, 00:12:57.186 "min_latency_us": 19160.72610441767, 00:12:57.186 "max_latency_us": 70326.18152610442 00:12:57.186 } 
00:12:57.186 ], 00:12:57.186 "core_count": 1 00:12:57.186 } 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64312 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64312 ']' 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64312 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64312 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.186 killing process with pid 64312 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64312' 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64312 00:12:57.186 Received shutdown signal, test time was about 10.000000 seconds 00:12:57.186 00:12:57.186 Latency(us) 00:12:57.186 [2024-12-09T09:23:34.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.186 [2024-12-09T09:23:34.909Z] =================================================================================================================== 00:12:57.186 [2024-12-09T09:23:34.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64312 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.186 rmmod nvme_tcp 00:12:57.186 rmmod nvme_fabrics 00:12:57.186 rmmod nvme_keyring 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64288 ']' 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64288 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64288 ']' 00:12:57.186 
09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64288 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:12:57.186 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.445 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64288 00:12:57.445 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:57.445 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:57.445 killing process with pid 64288 00:12:57.445 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64288' 00:12:57.445 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64288 00:12:57.445 09:23:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64288 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:57.703 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:57.961 09:23:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:12:57.961 00:12:57.961 real 0m13.471s 00:12:57.961 user 0m22.049s 00:12:57.961 sys 0m3.102s 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.961 ************************************ 00:12:57.961 END TEST nvmf_queue_depth 00:12:57.961 ************************************ 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:57.961 ************************************ 00:12:57.961 START TEST nvmf_target_multipath 00:12:57.961 ************************************ 00:12:57.961 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:58.221 * Looking for test storage... 
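The nvmftestfini teardown traced above removes only SPDK's own iptables rules and then unwinds the virtual network. Condensed, and assuming the namespace itself is finally dropped with ip netns del (the body of _remove_spdk_ns is not shown in this log), it amounts to:
    # Condensed sketch of the teardown performed by nvmftestfini above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the SPDK_NVMF-tagged rules
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster || true
        ip link set "$ifc" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns del nvmf_tgt_ns_spdk || true                   # assumed equivalent of _remove_spdk_ns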
00:12:58.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:58.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.221 --rc genhtml_branch_coverage=1 00:12:58.221 --rc genhtml_function_coverage=1 00:12:58.221 --rc genhtml_legend=1 00:12:58.221 --rc geninfo_all_blocks=1 00:12:58.221 --rc geninfo_unexecuted_blocks=1 00:12:58.221 00:12:58.221 ' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:58.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.221 --rc genhtml_branch_coverage=1 00:12:58.221 --rc genhtml_function_coverage=1 00:12:58.221 --rc genhtml_legend=1 00:12:58.221 --rc geninfo_all_blocks=1 00:12:58.221 --rc geninfo_unexecuted_blocks=1 00:12:58.221 00:12:58.221 ' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:58.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.221 --rc genhtml_branch_coverage=1 00:12:58.221 --rc genhtml_function_coverage=1 00:12:58.221 --rc genhtml_legend=1 00:12:58.221 --rc geninfo_all_blocks=1 00:12:58.221 --rc geninfo_unexecuted_blocks=1 00:12:58.221 00:12:58.221 ' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:58.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.221 --rc genhtml_branch_coverage=1 00:12:58.221 --rc genhtml_function_coverage=1 00:12:58.221 --rc genhtml_legend=1 00:12:58.221 --rc geninfo_all_blocks=1 00:12:58.221 --rc geninfo_unexecuted_blocks=1 00:12:58.221 00:12:58.221 ' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.221 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.221 
09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.222 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.222 09:23:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.222 Cannot find device "nvmf_init_br" 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.222 Cannot find device "nvmf_init_br2" 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.222 Cannot find device "nvmf_tgt_br" 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.222 Cannot find device "nvmf_tgt_br2" 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.222 Cannot find device "nvmf_init_br" 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:12:58.222 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.481 Cannot find device "nvmf_init_br2" 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.481 Cannot find device "nvmf_tgt_br" 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.481 Cannot find device "nvmf_tgt_br2" 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.481 Cannot find device "nvmf_br" 00:12:58.481 09:23:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.481 Cannot find device "nvmf_init_if" 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.481 Cannot find device "nvmf_init_if2" 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.481 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
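Restated in one place, the virtual topology that nvmf_veth_init has just built uses the interface names and addresses exactly as they appear in the trace: two initiator-side veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with the *_br peer ends enslaved to the nvmf_br bridge in the steps that follow.
    # Summary sketch of the veth/namespace topology created above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2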
00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:58.482 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:58.741 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.741 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:12:58.741 00:12:58.741 --- 10.0.0.3 ping statistics --- 00:12:58.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.741 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:58.741 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:58.741 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:58.741 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:12:58.741 00:12:58.741 --- 10.0.0.4 ping statistics --- 00:12:58.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.742 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:12:58.742 00:12:58.742 --- 10.0.0.1 ping statistics --- 00:12:58.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.742 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:58.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:12:58.742 00:12:58.742 --- 10.0.0.2 ping statistics --- 00:12:58.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.742 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64690 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64690 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64690 ']' 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
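With connectivity across the bridge confirmed by the four pings above, nvmfappstart launches the target application inside the namespace (the full command line appears just below) and the test blocks until the RPC socket at /var/tmp/spdk.sock answers. A simplified sketch of that startup step follows; the polling loop is a stand-in for waitforlisten, and rpc_get_methods is used here only as a convenient RPC to probe the socket with.
    # Sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # simplified stand-in for waitforlisten "$nvmfpid"
    done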
00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:58.742 09:23:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.742 [2024-12-09 09:23:36.416750] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:12:58.742 [2024-12-09 09:23:36.416824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.001 [2024-12-09 09:23:36.572673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.001 [2024-12-09 09:23:36.620547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.001 [2024-12-09 09:23:36.620611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.001 [2024-12-09 09:23:36.620623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.001 [2024-12-09 09:23:36.620632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.001 [2024-12-09 09:23:36.620639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.001 [2024-12-09 09:23:36.621523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.001 [2024-12-09 09:23:36.621708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.001 [2024-12-09 09:23:36.621836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.001 [2024-12-09 09:23:36.621785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.001 [2024-12-09 09:23:36.664060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:59.569 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.569 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:12:59.569 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.569 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.569 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:59.827 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.827 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:59.827 [2024-12-09 09:23:37.534616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.086 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:00.086 Malloc0 00:13:00.345 09:23:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:00.345 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.604 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:00.863 [2024-12-09 09:23:38.468136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:00.863 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:13:01.123 [2024-12-09 09:23:38.692015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:13:01.123 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:13:01.400 09:23:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:13:01.400 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.400 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.400 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.400 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.401 09:23:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.305 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.305 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.305 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64780 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:03.564 09:23:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:03.564 [global] 00:13:03.564 thread=1 00:13:03.564 invalidate=1 00:13:03.564 rw=randrw 00:13:03.564 time_based=1 00:13:03.564 runtime=6 00:13:03.564 ioengine=libaio 00:13:03.564 direct=1 00:13:03.564 bs=4096 00:13:03.564 iodepth=128 00:13:03.564 norandommap=0 00:13:03.564 numjobs=1 00:13:03.564 00:13:03.564 verify_dump=1 00:13:03.564 verify_backlog=512 00:13:03.564 verify_state_save=0 00:13:03.564 do_verify=1 00:13:03.564 verify=crc32c-intel 00:13:03.564 [job0] 00:13:03.564 filename=/dev/nvme0n1 00:13:03.564 Could not set queue depth (nvme0n1) 00:13:03.823 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.823 fio-3.35 00:13:03.823 Starting 1 thread 00:13:04.425 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:04.684 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:04.941 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:04.942 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:04.942 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:04.942 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:04.942 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:04.942 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:05.200 09:23:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64780 00:13:10.469 00:13:10.469 job0: (groupid=0, jobs=1): err= 0: pid=64801: Mon Dec 9 09:23:47 2024 00:13:10.469 read: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(310MiB/6004msec) 00:13:10.469 slat (usec): min=3, max=5063, avg=41.58, stdev=154.36 00:13:10.469 clat (usec): min=613, max=16162, avg=6695.24, stdev=1289.10 00:13:10.469 lat (usec): min=636, max=16175, avg=6736.82, stdev=1294.73 00:13:10.469 clat percentiles (usec): 00:13:10.469 | 1.00th=[ 3851], 5.00th=[ 4686], 10.00th=[ 5342], 20.00th=[ 5932], 00:13:10.469 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6783], 00:13:10.469 | 70.00th=[ 6980], 80.00th=[ 7373], 90.00th=[ 8029], 95.00th=[ 9503], 00:13:10.469 | 99.00th=[10683], 99.50th=[11338], 99.90th=[13042], 99.95th=[13304], 00:13:10.469 | 99.99th=[14222] 00:13:10.469 bw ( KiB/s): min=15584, max=35688, per=51.15%, avg=27027.00, stdev=7239.00, samples=11 00:13:10.469 iops : min= 3896, max= 8922, avg=6756.73, stdev=1809.74, samples=11 00:13:10.469 write: IOPS=7582, BW=29.6MiB/s (31.1MB/s)(157MiB/5299msec); 0 zone resets 00:13:10.469 slat (usec): min=4, max=2499, avg=54.27, stdev=98.04 00:13:10.469 clat (usec): min=675, max=12462, avg=5708.41, stdev=1101.05 00:13:10.469 lat (usec): min=712, max=12487, avg=5762.68, stdev=1103.70 00:13:10.469 clat percentiles (usec): 00:13:10.469 | 1.00th=[ 3064], 5.00th=[ 3884], 10.00th=[ 4293], 20.00th=[ 4948], 00:13:10.469 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 5932], 00:13:10.469 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6783], 95.00th=[ 7308], 00:13:10.469 | 99.00th=[ 9110], 99.50th=[10028], 99.90th=[11338], 99.95th=[11600], 00:13:10.469 | 99.99th=[12256] 00:13:10.469 bw ( KiB/s): min=16384, max=35272, per=88.95%, avg=26976.64, stdev=6863.63, samples=11 00:13:10.469 iops : min= 4096, max= 8818, avg=6744.09, stdev=1715.85, samples=11 00:13:10.469 lat (usec) : 750=0.01%, 1000=0.01% 00:13:10.469 lat (msec) : 2=0.15%, 4=2.76%, 10=95.11%, 20=1.97% 00:13:10.469 cpu : usr=7.28%, sys=31.76%, ctx=7483, majf=0, minf=90 00:13:10.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:10.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:10.469 issued rwts: total=79306,40178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:10.469 00:13:10.469 Run status group 0 (all jobs): 00:13:10.469 READ: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=310MiB (325MB), run=6004-6004msec 00:13:10.469 WRITE: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=157MiB (165MB), run=5299-5299msec 00:13:10.469 00:13:10.469 Disk stats (read/write): 00:13:10.469 nvme0n1: ios=78439/39478, merge=0/0, ticks=477113/194311, in_queue=671424, util=98.68% 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64880 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:10.469 09:23:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:10.469 [global] 00:13:10.469 thread=1 00:13:10.469 invalidate=1 00:13:10.469 rw=randrw 00:13:10.469 time_based=1 00:13:10.469 runtime=6 00:13:10.469 ioengine=libaio 00:13:10.469 direct=1 00:13:10.469 bs=4096 00:13:10.469 iodepth=128 00:13:10.469 norandommap=0 00:13:10.469 numjobs=1 00:13:10.469 00:13:10.469 verify_dump=1 00:13:10.469 verify_backlog=512 00:13:10.469 verify_state_save=0 00:13:10.469 do_verify=1 00:13:10.469 verify=crc32c-intel 00:13:10.469 [job0] 00:13:10.469 filename=/dev/nvme0n1 00:13:10.469 Could not set queue depth (nvme0n1) 00:13:10.469 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:10.469 fio-3.35 00:13:10.469 Starting 1 thread 00:13:11.404 09:23:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:11.662 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:11.920 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:12.178 09:23:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64880 00:13:17.441 00:13:17.441 job0: (groupid=0, jobs=1): err= 0: pid=64906: Mon Dec 9 09:23:54 2024 00:13:17.441 read: IOPS=14.3k, BW=55.8MiB/s (58.6MB/s)(335MiB/6005msec) 00:13:17.441 slat (usec): min=3, max=4798, avg=33.96, stdev=124.71 00:13:17.441 clat (usec): min=276, max=13616, avg=6185.19, stdev=1410.76 00:13:17.441 lat (usec): min=292, max=13623, avg=6219.16, stdev=1419.48 00:13:17.441 clat percentiles (usec): 00:13:17.441 | 1.00th=[ 2245], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 5211], 00:13:17.441 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6456], 00:13:17.441 | 70.00th=[ 6587], 80.00th=[ 6849], 90.00th=[ 7439], 95.00th=[ 8979], 00:13:17.441 | 99.00th=[10290], 99.50th=[10683], 99.90th=[11731], 99.95th=[12256], 00:13:17.441 | 99.99th=[13173] 00:13:17.441 bw ( KiB/s): min=11704, max=46968, per=51.03%, avg=29181.09, stdev=10039.20, samples=11 00:13:17.441 iops : min= 2926, max=11742, avg=7295.27, stdev=2509.80, samples=11 00:13:17.441 write: IOPS=8555, BW=33.4MiB/s (35.0MB/s)(172MiB/5159msec); 0 zone resets 00:13:17.441 slat (usec): min=5, max=4798, avg=46.99, stdev=85.65 00:13:17.441 clat (usec): min=493, max=11800, avg=5228.25, stdev=1330.41 00:13:17.441 lat (usec): min=515, max=11833, avg=5275.24, stdev=1337.54 00:13:17.441 clat percentiles (usec): 00:13:17.441 | 1.00th=[ 1565], 5.00th=[ 2966], 10.00th=[ 3490], 20.00th=[ 4113], 00:13:17.441 | 30.00th=[ 4621], 40.00th=[ 5145], 50.00th=[ 5473], 60.00th=[ 5735], 00:13:17.441 | 70.00th=[ 5932], 80.00th=[ 6128], 90.00th=[ 6456], 95.00th=[ 6980], 00:13:17.441 | 99.00th=[ 8979], 99.50th=[ 9372], 99.90th=[10552], 99.95th=[10945], 00:13:17.441 | 99.99th=[11731] 00:13:17.441 bw ( KiB/s): min=12288, max=45920, per=85.40%, avg=29226.91, stdev=9661.85, samples=11 00:13:17.441 iops : min= 3072, max=11480, avg=7306.73, stdev=2415.46, samples=11 00:13:17.441 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.08% 00:13:17.441 lat (msec) : 2=0.95%, 4=8.70%, 10=89.01%, 20=1.21% 00:13:17.441 cpu : usr=7.13%, sys=33.24%, ctx=8317, majf=0, minf=127 00:13:17.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:17.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:17.441 issued rwts: total=85844,44137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.441 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:13:17.441 00:13:17.441 Run status group 0 (all jobs): 00:13:17.441 READ: bw=55.8MiB/s (58.6MB/s), 55.8MiB/s-55.8MiB/s (58.6MB/s-58.6MB/s), io=335MiB (352MB), run=6005-6005msec 00:13:17.441 WRITE: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=172MiB (181MB), run=5159-5159msec 00:13:17.441 00:13:17.441 Disk stats (read/write): 00:13:17.441 nvme0n1: ios=85018/43046, merge=0/0, ticks=480218/195066, in_queue=675284, util=98.63% 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.441 rmmod nvme_tcp 00:13:17.441 rmmod nvme_fabrics 00:13:17.441 rmmod nvme_keyring 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
64690 ']' 00:13:17.441 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64690 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64690 ']' 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64690 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64690 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.442 killing process with pid 64690 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64690' 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64690 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64690 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:17.442 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:17.442 09:23:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.442 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:13:17.700 00:13:17.700 real 0m19.624s 00:13:17.700 user 1m9.745s 00:13:17.700 sys 0m12.726s 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.700 ************************************ 00:13:17.700 END TEST nvmf_target_multipath 00:13:17.700 ************************************ 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:17.700 ************************************ 00:13:17.700 START TEST nvmf_zcopy 00:13:17.700 ************************************ 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:17.700 * Looking for test storage... 
00:13:17.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:17.700 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.959 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.960 --rc genhtml_branch_coverage=1 00:13:17.960 --rc genhtml_function_coverage=1 00:13:17.960 --rc genhtml_legend=1 00:13:17.960 --rc geninfo_all_blocks=1 00:13:17.960 --rc geninfo_unexecuted_blocks=1 00:13:17.960 00:13:17.960 ' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.960 --rc genhtml_branch_coverage=1 00:13:17.960 --rc genhtml_function_coverage=1 00:13:17.960 --rc genhtml_legend=1 00:13:17.960 --rc geninfo_all_blocks=1 00:13:17.960 --rc geninfo_unexecuted_blocks=1 00:13:17.960 00:13:17.960 ' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.960 --rc genhtml_branch_coverage=1 00:13:17.960 --rc genhtml_function_coverage=1 00:13:17.960 --rc genhtml_legend=1 00:13:17.960 --rc geninfo_all_blocks=1 00:13:17.960 --rc geninfo_unexecuted_blocks=1 00:13:17.960 00:13:17.960 ' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:17.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.960 --rc genhtml_branch_coverage=1 00:13:17.960 --rc genhtml_function_coverage=1 00:13:17.960 --rc genhtml_legend=1 00:13:17.960 --rc geninfo_all_blocks=1 00:13:17.960 --rc geninfo_unexecuted_blocks=1 00:13:17.960 00:13:17.960 ' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:17.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:17.960 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:17.961 Cannot find device "nvmf_init_br" 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:13:17.961 09:23:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:17.961 Cannot find device "nvmf_init_br2" 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:17.961 Cannot find device "nvmf_tgt_br" 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.961 Cannot find device "nvmf_tgt_br2" 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:17.961 Cannot find device "nvmf_init_br" 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:17.961 Cannot find device "nvmf_init_br2" 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:13:17.961 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:18.220 Cannot find device "nvmf_tgt_br" 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:18.220 Cannot find device "nvmf_tgt_br2" 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:18.220 Cannot find device "nvmf_br" 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:18.220 Cannot find device "nvmf_init_if" 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:18.220 Cannot find device "nvmf_init_if2" 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:18.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:18.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:18.220 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:18.480 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:18.480 09:23:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:18.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:18.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:13:18.480 00:13:18.480 --- 10.0.0.3 ping statistics --- 00:13:18.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.480 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:18.480 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:18.480 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.097 ms 00:13:18.480 00:13:18.480 --- 10.0.0.4 ping statistics --- 00:13:18.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.480 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:18.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:13:18.480 00:13:18.480 --- 10.0.0.1 ping statistics --- 00:13:18.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.480 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:18.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:18.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:18.480 00:13:18.480 --- 10.0.0.2 ping statistics --- 00:13:18.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.480 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65207 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65207 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65207 ']' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.480 09:23:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.740 [2024-12-09 09:23:56.215886] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:13:18.740 [2024-12-09 09:23:56.215959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.740 [2024-12-09 09:23:56.370198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.740 [2024-12-09 09:23:56.421156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.740 [2024-12-09 09:23:56.421196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.740 [2024-12-09 09:23:56.421206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.740 [2024-12-09 09:23:56.421214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.740 [2024-12-09 09:23:56.421221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.740 [2024-12-09 09:23:56.421477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.999 [2024-12-09 09:23:56.463075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.566 [2024-12-09 09:23:57.159614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.566 [2024-12-09 09:23:57.179716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.566 malloc0 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:19.566 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:19.567 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:19.567 { 00:13:19.567 "params": { 00:13:19.567 "name": "Nvme$subsystem", 00:13:19.567 "trtype": "$TEST_TRANSPORT", 00:13:19.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:19.567 "adrfam": "ipv4", 00:13:19.567 "trsvcid": "$NVMF_PORT", 00:13:19.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:19.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:19.567 "hdgst": ${hdgst:-false}, 00:13:19.567 "ddgst": ${ddgst:-false} 00:13:19.567 }, 00:13:19.567 "method": "bdev_nvme_attach_controller" 00:13:19.567 } 00:13:19.567 EOF 00:13:19.567 )") 00:13:19.567 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:19.567 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:13:19.567 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:19.567 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:19.567 "params": { 00:13:19.567 "name": "Nvme1", 00:13:19.567 "trtype": "tcp", 00:13:19.567 "traddr": "10.0.0.3", 00:13:19.567 "adrfam": "ipv4", 00:13:19.567 "trsvcid": "4420", 00:13:19.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:19.567 "hdgst": false, 00:13:19.567 "ddgst": false 00:13:19.567 }, 00:13:19.567 "method": "bdev_nvme_attach_controller" 00:13:19.567 }' 00:13:19.567 [2024-12-09 09:23:57.265642] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:13:19.567 [2024-12-09 09:23:57.265706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65240 ] 00:13:19.825 [2024-12-09 09:23:57.416771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.825 [2024-12-09 09:23:57.461044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.825 [2024-12-09 09:23:57.510477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.084 Running I/O for 10 seconds... 00:13:21.965 8136.00 IOPS, 63.56 MiB/s [2024-12-09T09:24:01.062Z] 8192.50 IOPS, 64.00 MiB/s [2024-12-09T09:24:01.630Z] 8184.00 IOPS, 63.94 MiB/s [2024-12-09T09:24:03.008Z] 8199.00 IOPS, 64.05 MiB/s [2024-12-09T09:24:03.946Z] 8208.60 IOPS, 64.13 MiB/s [2024-12-09T09:24:04.882Z] 8215.33 IOPS, 64.18 MiB/s [2024-12-09T09:24:05.819Z] 8218.71 IOPS, 64.21 MiB/s [2024-12-09T09:24:06.758Z] 8221.00 IOPS, 64.23 MiB/s [2024-12-09T09:24:07.693Z] 8200.78 IOPS, 64.07 MiB/s [2024-12-09T09:24:07.693Z] 8175.80 IOPS, 63.87 MiB/s 00:13:29.970 Latency(us) 00:13:29.970 [2024-12-09T09:24:07.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.970 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:29.970 Verification LBA range: start 0x0 length 0x1000 00:13:29.970 Nvme1n1 : 10.01 8177.30 63.89 0.00 0.00 15609.12 284.58 24845.78 00:13:29.970 [2024-12-09T09:24:07.693Z] =================================================================================================================== 00:13:29.970 [2024-12-09T09:24:07.693Z] Total : 8177.30 63.89 0.00 0.00 15609.12 284.58 24845.78 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65363 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:30.229 { 00:13:30.229 "params": { 00:13:30.229 "name": "Nvme$subsystem", 00:13:30.229 "trtype": "$TEST_TRANSPORT", 00:13:30.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:30.229 "adrfam": "ipv4", 00:13:30.229 "trsvcid": "$NVMF_PORT", 00:13:30.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:30.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:30.229 "hdgst": ${hdgst:-false}, 00:13:30.229 "ddgst": ${ddgst:-false} 00:13:30.229 }, 00:13:30.229 "method": "bdev_nvme_attach_controller" 00:13:30.229 } 00:13:30.229 EOF 00:13:30.229 )") 00:13:30.229 [2024-12-09 09:24:07.791579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.229 [2024-12-09 09:24:07.791614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:30.229 09:24:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:30.229 "params": { 00:13:30.229 "name": "Nvme1", 00:13:30.229 "trtype": "tcp", 00:13:30.229 "traddr": "10.0.0.3", 00:13:30.229 "adrfam": "ipv4", 00:13:30.229 "trsvcid": "4420", 00:13:30.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.229 "hdgst": false, 00:13:30.229 "ddgst": false 00:13:30.229 }, 00:13:30.229 "method": "bdev_nvme_attach_controller" 00:13:30.229 }' 00:13:30.229 [2024-12-09 09:24:07.803556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.229 [2024-12-09 09:24:07.803586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.229 [2024-12-09 09:24:07.811542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.229 [2024-12-09 09:24:07.811573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.229 [2024-12-09 09:24:07.823540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.229 [2024-12-09 09:24:07.823570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.229 [2024-12-09 09:24:07.831541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.229 [2024-12-09 09:24:07.831572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.229 [2024-12-09 09:24:07.839689] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:13:30.229 [2024-12-09 09:24:07.839765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65363 ] 00:13:30.230 [2024-12-09 09:24:07.843537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.843569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.851542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.851572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.859542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.859571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.867540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.867568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.875540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.875569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.883542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.883572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.891543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.891573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.899539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.899567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.907540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.907570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.915540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.915569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.927538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.927569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.935539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.935569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.230 [2024-12-09 09:24:07.947537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.230 [2024-12-09 09:24:07.947568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.488 [2024-12-09 09:24:07.955539] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.488 [2024-12-09 09:24:07.955568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.488 [2024-12-09 09:24:07.963542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.488 [2024-12-09 09:24:07.963567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.488 [2024-12-09 09:24:07.971541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:07.971564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:07.983539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:07.983564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:07.993239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.489 [2024-12-09 09:24:07.995542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:07.995565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.007531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.007562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.019510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.019536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.031491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.031517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.040108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.489 [2024-12-09 09:24:08.043474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.043499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.055456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.055490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.067443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.067478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.079426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.079456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.089215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.489 [2024-12-09 09:24:08.091405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.091429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.103390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.103418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.115368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.115394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.127359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.127389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.139343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.139373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.151331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.151360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.163320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.163348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.175310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.175337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 [2024-12-09 09:24:08.187298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.187328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.489 Running I/O for 5 seconds... 
00:13:30.489 [2024-12-09 09:24:08.199288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.489 [2024-12-09 09:24:08.199310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.215564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.215595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.227222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.227254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.241927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.241959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.253253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.253286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.268056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.268088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.283648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.283680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.297812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.297843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.311531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.311564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.326278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.326308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.341992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.342025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.356698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.356729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.372230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.372267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.386179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.386213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.400783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 
[2024-12-09 09:24:08.400815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.414591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.414623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.429571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.429601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.444869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.444900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.458618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.458650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.473306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.473337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.776 [2024-12-09 09:24:08.488544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:30.776 [2024-12-09 09:24:08.488575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.502414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.502445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.517343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.517375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.533367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.533397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.547131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.547163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.561944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.561970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.578206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.578244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.593012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.593044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.607259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.607293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.621598] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.621633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.636243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.636279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.647168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.647205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.662439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.662490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.679731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.679769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.695860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.695902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.706811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.706848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.721555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.721597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.732754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.732791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.035 [2024-12-09 09:24:08.748498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.035 [2024-12-09 09:24:08.748534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.765515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.765551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.781473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.781509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.795891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.795929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.810664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.810701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.821356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.821390] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.836366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.836400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.853676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.853713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.869608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.869644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.886783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.886818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.902514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.902550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.916988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.917024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.931452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.931499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.947585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.947621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.963972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.964009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.981038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.981077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:08.997041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:08.997077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.294 [2024-12-09 09:24:09.014316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.294 [2024-12-09 09:24:09.014353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.030199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.030245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.047338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.047381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.064238] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.064280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.081139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.081178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.098157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.098196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.114978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.115018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.132062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.132102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.147963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.148001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.159003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.159041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.173904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.173942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.182740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.182776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.198693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.198729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 14277.00 IOPS, 111.54 MiB/s [2024-12-09T09:24:09.276Z] [2024-12-09 09:24:09.209927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.209966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.224843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.224879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.241021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.241058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.258502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.553 [2024-12-09 09:24:09.258540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.553 [2024-12-09 09:24:09.274075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:31.553 [2024-12-09 09:24:09.274112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.287583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.287621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.303346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.303383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.319513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.319551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.330798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.330832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.345370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.345408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.356484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.356524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.371319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.371356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.388379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.388416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.404519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.404557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.421675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.421716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.437741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.437780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.448745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.448783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.464390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.464429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.480822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.480860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.497862] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.497900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.514448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.514494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.811 [2024-12-09 09:24:09.531213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.811 [2024-12-09 09:24:09.531250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.070 [2024-12-09 09:24:09.547699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.070 [2024-12-09 09:24:09.547739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.070 [2024-12-09 09:24:09.563172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.070 [2024-12-09 09:24:09.563207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.070 [2024-12-09 09:24:09.577061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.577093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.585427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.585457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.594216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.594253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.602956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.602988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.611665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.611696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.620346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.620377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.628908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.628938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.637509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.637539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.646083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.646112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.654743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.654773] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.663302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.663331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.671826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.671856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.680483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.680515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.689209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.689241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.697972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.698003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.706530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.706560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.715301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.715333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.724062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.724094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.732981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.733013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.745031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.745063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.759633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.759665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.770811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.770842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.071 [2024-12-09 09:24:09.785521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.071 [2024-12-09 09:24:09.785550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.796962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.796995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.812063] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.812095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.828109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.828144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.842273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.842302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.855948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.855979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.870919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.870950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.886689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.886720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.900807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.900843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.915305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.915337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.929870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.929903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.941537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.941568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.956078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.956110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.967225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.967256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.981515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.981546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:09.996030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:09.996075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:10.011174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:10.011223] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:10.030915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:10.030963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.330 [2024-12-09 09:24:10.046095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.330 [2024-12-09 09:24:10.046153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.066362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.066421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.077450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.077504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.096527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.096574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.116886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.116943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.136498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.136550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.156418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.156485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.174133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.174185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.191029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.191078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 14325.00 IOPS, 111.91 MiB/s [2024-12-09T09:24:10.311Z] [2024-12-09 09:24:10.211293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.211347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.228550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.228597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.248685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.248730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.269079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.269123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 
09:24:10.284114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.284155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.588 [2024-12-09 09:24:10.301375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.588 [2024-12-09 09:24:10.301411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.321109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.321150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.340280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.340314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.350956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.350993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.369761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.369811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.389802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.389849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.410044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.410100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.429428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.429490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.446157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.446213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.466891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.466954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.487449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.487522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.505635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.505695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.524964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.525027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.545643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.545701] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.847 [2024-12-09 09:24:10.563322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.847 [2024-12-09 09:24:10.563374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.583088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.583138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.603076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.603124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.622853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.622908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.643283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.643322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.660867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.660911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.677258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.677301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.697705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.697746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.718213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.718269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.728964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.729000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.743996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.744033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.761440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.761487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.781372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.781409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.800518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.800559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.106 [2024-12-09 09:24:10.820725] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.106 [2024-12-09 09:24:10.820757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.840277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.840319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.857855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.857895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.878052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.878095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.896884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.896931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.916074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.916122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.936487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.936528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.955845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.955885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.975797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.975836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:10.993492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:10.993528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:11.010753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:11.010785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:11.028085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:11.028116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:11.045862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:11.045895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:11.060556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:11.060589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.365 [2024-12-09 09:24:11.078249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.365 [2024-12-09 09:24:11.078281] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.095834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.095868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.113685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.113717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.131682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.131714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.149527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.149557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.167186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.167217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.184796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.184828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 14382.67 IOPS, 112.36 MiB/s [2024-12-09T09:24:11.348Z] [2024-12-09 09:24:11.203000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.203036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.220928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.220961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.238742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.238787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.256102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.256140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.274109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.274146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.292246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.292285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.309669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.309705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 09:24:11.327630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.327660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.625 [2024-12-09 
09:24:11.345267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.625 [2024-12-09 09:24:11.345297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.363281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.363311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.381428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.381455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.399519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.399550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.417260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.417289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.435126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.435156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.450436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.450477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.469758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.469786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.487725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.487754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.505435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.505475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.523016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.523046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.540730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.540760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.555389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.555418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.571374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.571405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.588613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.588641] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.885 [2024-12-09 09:24:11.605940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.885 [2024-12-09 09:24:11.605971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.623476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.623505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.641296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.641325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.658642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.658671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.676429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.676469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.694477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.694506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.712371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.712403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.727317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.727349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.746119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.746154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.763783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.763816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.781899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.781933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.799617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.799647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.814494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.814524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.833754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.833782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.145 [2024-12-09 09:24:11.851493] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.145 [2024-12-09 09:24:11.851523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.404 [2024-12-09 09:24:11.868525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.404 [2024-12-09 09:24:11.868554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.404 [2024-12-09 09:24:11.883346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.404 [2024-12-09 09:24:11.883376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.404 [2024-12-09 09:24:11.899035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.404 [2024-12-09 09:24:11.899064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.404 [2024-12-09 09:24:11.916600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.404 [2024-12-09 09:24:11.916632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.404 [2024-12-09 09:24:11.934389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.404 [2024-12-09 09:24:11.934432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.404 [2024-12-09 09:24:11.952269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:11.952314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:11.970184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:11.970243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:11.987730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:11.987768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.005897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.005937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.023668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.023705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.041818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.041863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.059382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.059422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.077341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.077384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.095509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.095550] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.405 [2024-12-09 09:24:12.113174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.405 [2024-12-09 09:24:12.113212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.130790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.130825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.147991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.148025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.165926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.165962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.184050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.184083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 14838.75 IOPS, 115.93 MiB/s [2024-12-09T09:24:12.387Z] [2024-12-09 09:24:12.202158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.202198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.220200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.220232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.234910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.234943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.250737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.250767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.267986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.268016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.285580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.285610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.302983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.303013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.320436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.320478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.664 [2024-12-09 09:24:12.338445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.664 [2024-12-09 09:24:12.338486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.665 [2024-12-09 
09:24:12.356536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.665 [2024-12-09 09:24:12.356566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.665 [2024-12-09 09:24:12.374088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.665 [2024-12-09 09:24:12.374117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.926 [2024-12-09 09:24:12.391493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.926 [2024-12-09 09:24:12.391524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.926 [2024-12-09 09:24:12.409762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.926 [2024-12-09 09:24:12.409788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.926 [2024-12-09 09:24:12.427207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.926 [2024-12-09 09:24:12.427237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.926 [2024-12-09 09:24:12.445521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.926 [2024-12-09 09:24:12.445551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.459745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.459774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.477338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.477368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.495042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.495072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.509828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.509858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.525286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.525318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.543234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.543268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.561012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.561044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.576054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.576084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.595348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.595379] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.613342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.613371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.630774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.630803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.927 [2024-12-09 09:24:12.645678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:34.927 [2024-12-09 09:24:12.645708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.665439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.665479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.682927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.682957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.700887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.700919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.718440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.718479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.736498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.736528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.754550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.754580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.772363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.772394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.790118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.790149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.805122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.805151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.825052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.825083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.839945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.839976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.859420] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.859450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.186 [2024-12-09 09:24:12.876921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.186 [2024-12-09 09:24:12.876953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.187 [2024-12-09 09:24:12.894772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.187 [2024-12-09 09:24:12.894803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:12.909637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:12.909665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:12.925805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:12.925831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:12.936594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:12.936622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:12.954729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:12.954759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:12.972908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:12.972939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:12.991040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:12.991069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.009083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.009114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.026514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.026544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.044486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.044515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.062443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.062485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.080127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.080157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.094861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.094890] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.113179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.113211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.130347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.130376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.148347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.148375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.448 [2024-12-09 09:24:13.166323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.448 [2024-12-09 09:24:13.166354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.717 [2024-12-09 09:24:13.184030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.717 [2024-12-09 09:24:13.184061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.717 15124.40 IOPS, 118.16 MiB/s [2024-12-09T09:24:13.440Z] [2024-12-09 09:24:13.198279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.717 [2024-12-09 09:24:13.198309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.717 00:13:35.717 Latency(us) 00:13:35.717 [2024-12-09T09:24:13.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.717 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:35.717 Nvme1n1 : 5.01 15126.92 118.18 0.00 0.00 8453.21 3316.28 15370.69 00:13:35.717 [2024-12-09T09:24:13.441Z] =================================================================================================================== 00:13:35.718 [2024-12-09T09:24:13.441Z] Total : 15126.92 118.18 0.00 0.00 8453.21 3316.28 15370.69 00:13:35.718 [2024-12-09 09:24:13.210565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.210591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.226554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.226583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.242534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.242561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.258509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.258535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.274498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.274531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.290455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 
09:24:13.290489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.306430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.306455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.322405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.322424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.338387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.338412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 [2024-12-09 09:24:13.354355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:35.718 [2024-12-09 09:24:13.354372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:35.718 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65363) - No such process 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65363 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:35.718 delay0 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.718 09:24:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:13:35.977 [2024-12-09 09:24:13.584218] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:42.553 Initializing NVMe Controllers 00:13:42.553 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:42.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:42.553 Initialization complete. Launching workers. 
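The tail of the run above swaps the malloc namespace for a delay bdev and then drives it with the abort example (its per-worker results follow below). A minimal stand-alone sketch of that same sequence, assuming an nvmf target is already serving bdev malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420; rpc_cmd in the log is the harness wrapper around scripts/rpc.py, and SPDK_DIR here is a hypothetical convenience variable, not something the harness defines:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    NQN=nqn.2016-06.io.spdk:cnode1
    # Detach the original namespace so its backing bdev can be wrapped.
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1
    # Wrap malloc0 in a delay bdev that adds 1,000,000 us (~1 s) to every I/O,
    # so requests stay in flight long enough for aborts to find them.
    "$SPDK_DIR/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-export the delayed bdev under the same NSID.
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1
    # 5-second mixed random read/write abort workload at queue depth 64
    # against that namespace, exactly as invoked above.
    "$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'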
00:13:42.553 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:13:42.553 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:13:42.553 success 279, unsuccessful 108, failed 0 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.553 rmmod nvme_tcp 00:13:42.553 rmmod nvme_fabrics 00:13:42.553 rmmod nvme_keyring 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65207 ']' 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65207 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65207 ']' 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65207 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65207 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:42.553 killing process with pid 65207 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65207' 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65207 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65207 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:42.553 09:24:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:42.553 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:42.553 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:42.553 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.554 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:13:42.813 00:13:42.813 real 0m25.035s 00:13:42.813 user 0m39.373s 00:13:42.813 sys 0m8.367s 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.813 ************************************ 00:13:42.813 END TEST nvmf_zcopy 00:13:42.813 ************************************ 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:42.813 ************************************ 00:13:42.813 START TEST nvmf_nmic 00:13:42.813 ************************************ 00:13:42.813 09:24:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:42.813 * Looking for test storage... 00:13:42.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:42.813 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:13:43.073 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:43.073 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.073 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.073 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.073 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.073 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.074 --rc genhtml_branch_coverage=1 00:13:43.074 --rc genhtml_function_coverage=1 00:13:43.074 --rc genhtml_legend=1 00:13:43.074 --rc geninfo_all_blocks=1 00:13:43.074 --rc geninfo_unexecuted_blocks=1 00:13:43.074 00:13:43.074 ' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.074 --rc genhtml_branch_coverage=1 00:13:43.074 --rc genhtml_function_coverage=1 00:13:43.074 --rc genhtml_legend=1 00:13:43.074 --rc geninfo_all_blocks=1 00:13:43.074 --rc geninfo_unexecuted_blocks=1 00:13:43.074 00:13:43.074 ' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.074 --rc genhtml_branch_coverage=1 00:13:43.074 --rc genhtml_function_coverage=1 00:13:43.074 --rc genhtml_legend=1 00:13:43.074 --rc geninfo_all_blocks=1 00:13:43.074 --rc geninfo_unexecuted_blocks=1 00:13:43.074 00:13:43.074 ' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:43.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.074 --rc genhtml_branch_coverage=1 00:13:43.074 --rc genhtml_function_coverage=1 00:13:43.074 --rc genhtml_legend=1 00:13:43.074 --rc geninfo_all_blocks=1 00:13:43.074 --rc geninfo_unexecuted_blocks=1 00:13:43.074 00:13:43.074 ' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.074 09:24:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.074 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:43.074 09:24:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:43.074 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:43.075 Cannot 
find device "nvmf_init_br" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:43.075 Cannot find device "nvmf_init_br2" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:43.075 Cannot find device "nvmf_tgt_br" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.075 Cannot find device "nvmf_tgt_br2" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:43.075 Cannot find device "nvmf_init_br" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:43.075 Cannot find device "nvmf_init_br2" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:43.075 Cannot find device "nvmf_tgt_br" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:43.075 Cannot find device "nvmf_tgt_br2" 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:13:43.075 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:43.335 Cannot find device "nvmf_br" 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:43.335 Cannot find device "nvmf_init_if" 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:43.335 Cannot find device "nvmf_init_if2" 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
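The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first probes for leftovers from a previous run, then builds the fixture from scratch with the ip commands that follow. A condensed sketch of the same topology, with the interface names and addresses taken from this log (run as root; an illustration of what the harness does, not the harness itself):

    # Namespace that will host the SPDK target side of the links.
    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per endpoint; the *_br peers get enslaved to a common bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target-side endpoints move into the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Initiator addresses stay in the root namespace, target addresses in the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up and tie the bridge-side peers together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Admit NVMe/TCP (port 4420) on the initiator interfaces and let traffic cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Reachability checks in both directions, matching the pings below.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1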
00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:43.335 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.335 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.335 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.336 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:43.336 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:43.336 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.336 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:43.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:13:43.595 00:13:43.595 --- 10.0.0.3 ping statistics --- 00:13:43.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.595 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:43.595 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:43.595 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:13:43.595 00:13:43.595 --- 10.0.0.4 ping statistics --- 00:13:43.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.595 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:43.595 00:13:43.595 --- 10.0.0.1 ping statistics --- 00:13:43.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.595 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:43.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:13:43.595 00:13:43.595 --- 10.0.0.2 ping statistics --- 00:13:43.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.595 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65743 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65743 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65743 ']' 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.595 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:43.595 [2024-12-09 09:24:21.248049] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
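With connectivity verified, nvmfappstart launches nvmf_tgt inside the target namespace (note the NVMF_APP array being prefixed with the netns command a few lines up) and waitforlisten then blocks until the target's RPC socket answers; the EAL parameter dump that follows is the target coming up. A hedged, condensed equivalent of those two steps, with the binary and socket paths as in the log and the retry policy simplified relative to waitforlisten:

    # sketch: start nvmf_tgt in the target namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Because /var/tmp/spdk.sock is a Unix-domain socket, the RPC client can stay in the default namespace even though the target process runs inside nvmf_tgt_ns_spdk.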
00:13:43.595 [2024-12-09 09:24:21.248120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.855 [2024-12-09 09:24:21.403595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.855 [2024-12-09 09:24:21.454391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.855 [2024-12-09 09:24:21.454442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.855 [2024-12-09 09:24:21.454452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.855 [2024-12-09 09:24:21.454475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.855 [2024-12-09 09:24:21.454482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.855 [2024-12-09 09:24:21.455353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.855 [2024-12-09 09:24:21.455508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.855 [2024-12-09 09:24:21.455602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.855 [2024-12-09 09:24:21.455604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.855 [2024-12-09 09:24:21.525895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:44.422 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.422 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:13:44.422 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:44.422 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:44.422 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.681 [2024-12-09 09:24:22.173443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.681 Malloc0 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:44.681 09:24:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.681 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.682 [2024-12-09 09:24:22.261745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:44.682 test case1: single bdev can't be used in multiple subsystems 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.682 [2024-12-09 09:24:22.297574] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:44.682 [2024-12-09 09:24:22.297626] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:44.682 [2024-12-09 09:24:22.297638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.682 request: 00:13:44.682 { 00:13:44.682 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:44.682 "namespace": { 00:13:44.682 "bdev_name": "Malloc0", 00:13:44.682 "no_auto_visible": false, 00:13:44.682 "hide_metadata": false 00:13:44.682 }, 00:13:44.682 "method": "nvmf_subsystem_add_ns", 00:13:44.682 "req_id": 1 00:13:44.682 } 00:13:44.682 Got JSON-RPC error response 00:13:44.682 response: 00:13:44.682 { 00:13:44.682 "code": -32602, 00:13:44.682 "message": "Invalid parameters" 00:13:44.682 } 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:44.682 Adding namespace failed - expected result. 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:44.682 test case2: host connect to nvmf target in multiple paths 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.682 [2024-12-09 09:24:22.317791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.682 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:44.940 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:13:44.940 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.940 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:13:44.940 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.940 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:44.940 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:13:47.483 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:47.483 [global] 00:13:47.483 thread=1 00:13:47.483 invalidate=1 00:13:47.483 rw=write 00:13:47.483 time_based=1 00:13:47.483 runtime=1 00:13:47.483 ioengine=libaio 00:13:47.483 direct=1 00:13:47.483 bs=4096 00:13:47.483 iodepth=1 00:13:47.483 norandommap=0 00:13:47.483 numjobs=1 00:13:47.483 00:13:47.483 verify_dump=1 00:13:47.483 verify_backlog=512 00:13:47.483 verify_state_save=0 00:13:47.483 do_verify=1 00:13:47.483 verify=crc32c-intel 00:13:47.483 [job0] 00:13:47.483 filename=/dev/nvme0n1 00:13:47.483 Could not set queue depth (nvme0n1) 00:13:47.483 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.483 fio-3.35 00:13:47.483 Starting 1 thread 00:13:48.417 00:13:48.417 job0: (groupid=0, jobs=1): err= 0: pid=65833: Mon Dec 9 09:24:25 2024 00:13:48.417 read: IOPS=2894, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:13:48.417 slat (nsec): min=7322, max=70224, avg=9086.33, stdev=3733.77 00:13:48.417 clat (usec): min=110, max=327, avg=196.50, stdev=28.20 00:13:48.417 lat (usec): min=120, max=353, avg=205.59, stdev=28.54 00:13:48.417 clat percentiles (usec): 00:13:48.417 | 1.00th=[ 137], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 176], 00:13:48.417 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:13:48.417 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 247], 00:13:48.417 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 310], 00:13:48.417 | 99.99th=[ 326] 00:13:48.417 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:48.417 slat (usec): min=11, max=107, avg=13.91, stdev= 6.34 00:13:48.417 clat (usec): min=64, max=247, avg=115.76, stdev=19.28 00:13:48.417 lat (usec): min=75, max=354, avg=129.67, stdev=21.06 00:13:48.417 clat percentiles (usec): 00:13:48.417 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 98], 00:13:48.417 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 116], 60.00th=[ 122], 00:13:48.417 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:13:48.417 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 188], 99.95th=[ 196], 00:13:48.417 | 99.99th=[ 247] 00:13:48.417 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:13:48.417 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:48.417 lat (usec) : 100=12.05%, 250=86.20%, 500=1.76% 00:13:48.417 cpu : usr=1.80%, sys=5.30%, ctx=5970, majf=0, minf=5 00:13:48.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.417 issued rwts: total=2897,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.417 00:13:48.417 Run status group 0 (all jobs): 00:13:48.417 READ: bw=11.3MiB/s (11.9MB/s), 11.3MiB/s-11.3MiB/s (11.9MB/s-11.9MB/s), io=11.3MiB (11.9MB), run=1001-1001msec 00:13:48.417 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:13:48.417 00:13:48.417 Disk stats (read/write): 00:13:48.417 nvme0n1: ios=2610/2815, merge=0/0, ticks=510/341, 
in_queue=851, util=91.47% 00:13:48.417 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:48.675 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.675 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:13:48.675 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:48.675 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.676 rmmod nvme_tcp 00:13:48.676 rmmod nvme_fabrics 00:13:48.676 rmmod nvme_keyring 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65743 ']' 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65743 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65743 ']' 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65743 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65743 00:13:48.676 killing process with pid 65743 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65743' 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65743 00:13:48.676 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65743 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:48.935 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:13:49.192 00:13:49.192 real 0m6.461s 00:13:49.192 user 0m19.481s 00:13:49.192 sys 0m2.401s 00:13:49.192 ************************************ 00:13:49.192 END TEST nvmf_nmic 00:13:49.192 ************************************ 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.192 09:24:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:49.192 ************************************ 00:13:49.192 START TEST nvmf_fio_target 00:13:49.192 ************************************ 00:13:49.192 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:49.452 * Looking for test storage... 00:13:49.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.452 --rc genhtml_branch_coverage=1 00:13:49.452 --rc genhtml_function_coverage=1 00:13:49.452 --rc genhtml_legend=1 00:13:49.452 --rc geninfo_all_blocks=1 00:13:49.452 --rc geninfo_unexecuted_blocks=1 00:13:49.452 00:13:49.452 ' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.452 --rc genhtml_branch_coverage=1 00:13:49.452 --rc genhtml_function_coverage=1 00:13:49.452 --rc genhtml_legend=1 00:13:49.452 --rc geninfo_all_blocks=1 00:13:49.452 --rc geninfo_unexecuted_blocks=1 00:13:49.452 00:13:49.452 ' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.452 --rc genhtml_branch_coverage=1 00:13:49.452 --rc genhtml_function_coverage=1 00:13:49.452 --rc genhtml_legend=1 00:13:49.452 --rc geninfo_all_blocks=1 00:13:49.452 --rc geninfo_unexecuted_blocks=1 00:13:49.452 00:13:49.452 ' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.452 --rc genhtml_branch_coverage=1 00:13:49.452 --rc genhtml_function_coverage=1 00:13:49.452 --rc genhtml_legend=1 00:13:49.452 --rc geninfo_all_blocks=1 00:13:49.452 --rc geninfo_unexecuted_blocks=1 00:13:49.452 00:13:49.452 ' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:49.452 
09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:49.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.452 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.453 09:24:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.453 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:49.712 Cannot find device "nvmf_init_br" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:49.712 Cannot find device "nvmf_init_br2" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:49.712 Cannot find device "nvmf_tgt_br" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.712 Cannot find device "nvmf_tgt_br2" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:49.712 Cannot find device "nvmf_init_br" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:49.712 Cannot find device "nvmf_init_br2" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:49.712 Cannot find device "nvmf_tgt_br" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:49.712 Cannot find device "nvmf_tgt_br2" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:49.712 Cannot find device "nvmf_br" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:49.712 Cannot find device "nvmf_init_if" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:49.712 Cannot find device "nvmf_init_if2" 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:13:49.712 
09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.712 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:49.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:13:49.973 00:13:49.973 --- 10.0.0.3 ping statistics --- 00:13:49.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.973 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:49.973 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:49.973 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:13:49.973 00:13:49.973 --- 10.0.0.4 ping statistics --- 00:13:49.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.973 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:49.973 00:13:49.973 --- 10.0.0.1 ping statistics --- 00:13:49.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.973 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:49.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:49.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:49.973 00:13:49.973 --- 10.0.0.2 ping statistics --- 00:13:49.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.973 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66068 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66068 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66068 ']' 00:13:49.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.973 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.235 [2024-12-09 09:24:27.719103] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
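The trace above is the per-test network plumbing from nvmf/common.sh: veth pairs give the host two initiator-side interfaces (nvmf_init_if/nvmf_init_if2 on 10.0.0.1 and 10.0.0.2) and the nvmf_tgt_ns_spdk namespace two target-side interfaces (nvmf_tgt_if/nvmf_tgt_if2 on 10.0.0.3 and 10.0.0.4); the peer ends are joined by the nvmf_br bridge, iptables rules open the NVMe/TCP port 4420, and a ping sweep verifies both directions before the target starts. A minimal stand-alone sketch of that topology, using only the commands, interface names, and addresses visible in the trace (the SPDK_NVMF iptables comments and the error handling of the real helper are omitted):

#!/usr/bin/env bash
# Sketch only: rebuilds the veth/bridge layout used by the nvmf target tests above.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces live inside the namespace the nvmf_tgt app will run in
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# initiator addresses on the host, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, including loopback inside the namespace
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# a single bridge joins the four *_br peers
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# let NVMe/TCP traffic (port 4420) reach the initiator interfaces and cross the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2

Running nvmf_tgt under ip netns exec nvmf_tgt_ns_spdk, as the following trace lines do, keeps the target's listeners isolated on 10.0.0.3/10.0.0.4 while the initiator connects from the host side of the bridge.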
00:13:50.236 [2024-12-09 09:24:27.719351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.236 [2024-12-09 09:24:27.873046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.236 [2024-12-09 09:24:27.923086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.236 [2024-12-09 09:24:27.923141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.236 [2024-12-09 09:24:27.923151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.236 [2024-12-09 09:24:27.923160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.236 [2024-12-09 09:24:27.923167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.236 [2024-12-09 09:24:27.924074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.236 [2024-12-09 09:24:27.924145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.236 [2024-12-09 09:24:27.924310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.236 [2024-12-09 09:24:27.924315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.500 [2024-12-09 09:24:27.967662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.066 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:51.323 [2024-12-09 09:24:28.839237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.323 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:51.581 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:51.581 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:51.839 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:51.839 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.096 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:52.096 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.355 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:52.355 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:52.613 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.872 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:52.872 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.131 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:53.131 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.389 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:53.389 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:53.958 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:53.958 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:53.958 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.216 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:54.216 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.474 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:54.733 [2024-12-09 09:24:32.286566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:54.733 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:54.992 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:55.251 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:55.251 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:55.251 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:13:55.251 09:24:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.251 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:13:55.251 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:13:55.251 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:13:57.785 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:57.785 [global] 00:13:57.785 thread=1 00:13:57.785 invalidate=1 00:13:57.785 rw=write 00:13:57.785 time_based=1 00:13:57.785 runtime=1 00:13:57.785 ioengine=libaio 00:13:57.785 direct=1 00:13:57.785 bs=4096 00:13:57.785 iodepth=1 00:13:57.785 norandommap=0 00:13:57.785 numjobs=1 00:13:57.785 00:13:57.785 verify_dump=1 00:13:57.785 verify_backlog=512 00:13:57.785 verify_state_save=0 00:13:57.785 do_verify=1 00:13:57.785 verify=crc32c-intel 00:13:57.785 [job0] 00:13:57.785 filename=/dev/nvme0n1 00:13:57.785 [job1] 00:13:57.785 filename=/dev/nvme0n2 00:13:57.785 [job2] 00:13:57.785 filename=/dev/nvme0n3 00:13:57.785 [job3] 00:13:57.785 filename=/dev/nvme0n4 00:13:57.785 Could not set queue depth (nvme0n1) 00:13:57.785 Could not set queue depth (nvme0n2) 00:13:57.785 Could not set queue depth (nvme0n3) 00:13:57.786 Could not set queue depth (nvme0n4) 00:13:57.786 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.786 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.786 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.786 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.786 fio-3.35 00:13:57.786 Starting 4 threads 00:13:58.720 00:13:58.720 job0: (groupid=0, jobs=1): err= 0: pid=66253: Mon Dec 9 09:24:36 2024 00:13:58.720 read: IOPS=2792, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:13:58.720 slat (nsec): min=6050, max=25089, avg=7628.12, stdev=1342.99 00:13:58.720 clat (usec): min=119, max=2059, avg=187.10, stdev=53.20 00:13:58.720 lat (usec): min=127, max=2068, avg=194.72, stdev=52.81 00:13:58.720 clat percentiles (usec): 00:13:58.720 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:13:58.720 | 30.00th=[ 149], 40.00th=[ 184], 50.00th=[ 198], 60.00th=[ 206], 00:13:58.720 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239], 00:13:58.720 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 326], 99.95th=[ 437], 00:13:58.720 | 99.99th=[ 2057] 
00:13:58.720 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:58.720 slat (nsec): min=8283, max=97529, avg=13685.87, stdev=5154.35 00:13:58.720 clat (usec): min=77, max=222, avg=133.03, stdev=32.81 00:13:58.720 lat (usec): min=90, max=252, avg=146.72, stdev=32.56 00:13:58.720 clat percentiles (usec): 00:13:58.720 | 1.00th=[ 84], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 98], 00:13:58.720 | 30.00th=[ 105], 40.00th=[ 114], 50.00th=[ 131], 60.00th=[ 153], 00:13:58.720 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:13:58.720 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 210], 99.95th=[ 221], 00:13:58.720 | 99.99th=[ 223] 00:13:58.720 bw ( KiB/s): min=14568, max=14568, per=30.96%, avg=14568.00, stdev= 0.00, samples=1 00:13:58.720 iops : min= 3642, max= 3642, avg=3642.00, stdev= 0.00, samples=1 00:13:58.720 lat (usec) : 100=12.22%, 250=86.26%, 500=1.50% 00:13:58.720 lat (msec) : 4=0.02% 00:13:58.720 cpu : usr=1.60%, sys=5.20%, ctx=5867, majf=0, minf=9 00:13:58.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.720 issued rwts: total=2795,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.720 job1: (groupid=0, jobs=1): err= 0: pid=66254: Mon Dec 9 09:24:36 2024 00:13:58.720 read: IOPS=2303, BW=9215KiB/s (9436kB/s)(9224KiB/1001msec) 00:13:58.720 slat (nsec): min=7383, max=28465, avg=8382.26, stdev=1071.68 00:13:58.720 clat (usec): min=159, max=522, avg=220.46, stdev=25.95 00:13:58.720 lat (usec): min=167, max=531, avg=228.84, stdev=25.96 00:13:58.720 clat percentiles (usec): 00:13:58.720 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:13:58.720 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:13:58.720 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:13:58.720 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 416], 99.95th=[ 420], 00:13:58.720 | 99.99th=[ 523] 00:13:58.720 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:58.720 slat (nsec): min=7714, max=73692, avg=12778.30, stdev=4160.91 00:13:58.720 clat (usec): min=111, max=467, avg=169.96, stdev=17.60 00:13:58.720 lat (usec): min=124, max=477, avg=182.73, stdev=19.03 00:13:58.720 clat percentiles (usec): 00:13:58.720 | 1.00th=[ 120], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:13:58.720 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:13:58.720 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:13:58.720 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 235], 99.95th=[ 237], 00:13:58.720 | 99.99th=[ 469] 00:13:58.720 bw ( KiB/s): min=10856, max=10856, per=23.07%, avg=10856.00, stdev= 0.00, samples=1 00:13:58.720 iops : min= 2714, max= 2714, avg=2714.00, stdev= 0.00, samples=1 00:13:58.720 lat (usec) : 250=95.15%, 500=4.83%, 750=0.02% 00:13:58.720 cpu : usr=1.10%, sys=4.70%, ctx=4867, majf=0, minf=7 00:13:58.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.720 issued rwts: total=2306,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.720 latency : target=0, window=0, percentile=100.00%, depth=1 
00:13:58.720 job2: (groupid=0, jobs=1): err= 0: pid=66255: Mon Dec 9 09:24:36 2024 00:13:58.720 read: IOPS=3039, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:13:58.721 slat (nsec): min=7898, max=33969, avg=9979.18, stdev=3496.20 00:13:58.721 clat (usec): min=131, max=3597, avg=170.09, stdev=71.51 00:13:58.721 lat (usec): min=140, max=3605, avg=180.06, stdev=72.24 00:13:58.721 clat percentiles (usec): 00:13:58.721 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:13:58.721 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:13:58.721 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 212], 00:13:58.721 | 99.00th=[ 359], 99.50th=[ 449], 99.90th=[ 553], 99.95th=[ 627], 00:13:58.721 | 99.99th=[ 3589] 00:13:58.721 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:58.721 slat (nsec): min=10907, max=52140, avg=14977.35, stdev=4629.42 00:13:58.721 clat (usec): min=90, max=3944, avg=130.06, stdev=147.91 00:13:58.721 lat (usec): min=102, max=3962, avg=145.04, stdev=148.95 00:13:58.721 clat percentiles (usec): 00:13:58.721 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 106], 00:13:58.721 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 121], 00:13:58.721 | 70.00th=[ 130], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 165], 00:13:58.721 | 99.00th=[ 247], 99.50th=[ 277], 99.90th=[ 3818], 99.95th=[ 3851], 00:13:58.721 | 99.99th=[ 3949] 00:13:58.721 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1 00:13:58.721 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:58.721 lat (usec) : 100=2.06%, 250=96.97%, 500=0.75%, 750=0.11% 00:13:58.721 lat (msec) : 4=0.10% 00:13:58.721 cpu : usr=1.50%, sys=6.60%, ctx=6115, majf=0, minf=9 00:13:58.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.721 issued rwts: total=3043,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.721 job3: (groupid=0, jobs=1): err= 0: pid=66256: Mon Dec 9 09:24:36 2024 00:13:58.721 read: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:13:58.721 slat (nsec): min=7614, max=26087, avg=8539.03, stdev=1289.55 00:13:58.721 clat (usec): min=130, max=1608, avg=174.75, stdev=48.78 00:13:58.721 lat (usec): min=138, max=1616, avg=183.29, stdev=48.70 00:13:58.721 clat percentiles (usec): 00:13:58.721 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:13:58.721 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:13:58.721 | 70.00th=[ 169], 80.00th=[ 217], 90.00th=[ 241], 95.00th=[ 249], 00:13:58.721 | 99.00th=[ 273], 99.50th=[ 306], 99.90th=[ 627], 99.95th=[ 938], 00:13:58.721 | 99.99th=[ 1614] 00:13:58.721 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:58.721 slat (nsec): min=8090, max=72684, avg=13407.35, stdev=3677.83 00:13:58.721 clat (usec): min=84, max=516, avg=133.97, stdev=34.79 00:13:58.721 lat (usec): min=103, max=528, avg=147.37, stdev=35.87 00:13:58.721 clat percentiles (usec): 00:13:58.721 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 106], 00:13:58.721 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 124], 00:13:58.721 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:13:58.721 | 99.00th=[ 208], 99.50th=[ 217], 
99.90th=[ 243], 99.95th=[ 318], 00:13:58.721 | 99.99th=[ 519] 00:13:58.721 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1 00:13:58.721 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:58.721 lat (usec) : 100=2.52%, 250=95.02%, 500=2.39%, 750=0.03%, 1000=0.02% 00:13:58.721 lat (msec) : 2=0.02% 00:13:58.721 cpu : usr=1.60%, sys=5.40%, ctx=6033, majf=0, minf=21 00:13:58.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.721 issued rwts: total=2956,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.721 00:13:58.721 Run status group 0 (all jobs): 00:13:58.721 READ: bw=43.3MiB/s (45.4MB/s), 9215KiB/s-11.9MiB/s (9436kB/s-12.5MB/s), io=43.4MiB (45.5MB), run=1001-1001msec 00:13:58.721 WRITE: bw=46.0MiB/s (48.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=46.0MiB (48.2MB), run=1001-1001msec 00:13:58.721 00:13:58.721 Disk stats (read/write): 00:13:58.721 nvme0n1: ios=2610/2606, merge=0/0, ticks=517/344, in_queue=861, util=90.07% 00:13:58.721 nvme0n2: ios=2090/2143, merge=0/0, ticks=474/346, in_queue=820, util=88.56% 00:13:58.721 nvme0n3: ios=2596/2669, merge=0/0, ticks=495/356, in_queue=851, util=89.50% 00:13:58.721 nvme0n4: ios=2537/2560, merge=0/0, ticks=447/344, in_queue=791, util=89.95% 00:13:58.721 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:58.721 [global] 00:13:58.721 thread=1 00:13:58.721 invalidate=1 00:13:58.721 rw=randwrite 00:13:58.721 time_based=1 00:13:58.721 runtime=1 00:13:58.721 ioengine=libaio 00:13:58.721 direct=1 00:13:58.721 bs=4096 00:13:58.721 iodepth=1 00:13:58.721 norandommap=0 00:13:58.721 numjobs=1 00:13:58.721 00:13:58.721 verify_dump=1 00:13:58.721 verify_backlog=512 00:13:58.721 verify_state_save=0 00:13:58.721 do_verify=1 00:13:58.721 verify=crc32c-intel 00:13:58.721 [job0] 00:13:58.721 filename=/dev/nvme0n1 00:13:58.721 [job1] 00:13:58.721 filename=/dev/nvme0n2 00:13:58.721 [job2] 00:13:58.721 filename=/dev/nvme0n3 00:13:58.721 [job3] 00:13:58.721 filename=/dev/nvme0n4 00:13:58.980 Could not set queue depth (nvme0n1) 00:13:58.980 Could not set queue depth (nvme0n2) 00:13:58.980 Could not set queue depth (nvme0n3) 00:13:58.980 Could not set queue depth (nvme0n4) 00:13:58.980 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:58.980 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:58.980 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:58.980 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:58.980 fio-3.35 00:13:58.980 Starting 4 threads 00:14:00.354 00:14:00.354 job0: (groupid=0, jobs=1): err= 0: pid=66310: Mon Dec 9 09:24:37 2024 00:14:00.354 read: IOPS=3309, BW=12.9MiB/s (13.6MB/s)(12.9MiB/1001msec) 00:14:00.354 slat (nsec): min=7805, max=27331, avg=8426.95, stdev=1050.16 00:14:00.354 clat (usec): min=123, max=286, avg=157.01, stdev=20.67 00:14:00.354 lat (usec): min=131, max=308, avg=165.43, stdev=20.75 00:14:00.354 clat percentiles (usec): 00:14:00.354 
| 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:14:00.354 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:14:00.354 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 190], 95.00th=[ 202], 00:14:00.354 | 99.00th=[ 225], 99.50th=[ 235], 99.90th=[ 253], 99.95th=[ 281], 00:14:00.354 | 99.99th=[ 289] 00:14:00.354 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:14:00.354 slat (usec): min=11, max=107, avg=14.09, stdev= 6.30 00:14:00.354 clat (usec): min=73, max=635, avg=110.16, stdev=21.12 00:14:00.354 lat (usec): min=88, max=652, avg=124.25, stdev=22.73 00:14:00.354 clat percentiles (usec): 00:14:00.354 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:14:00.354 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 110], 00:14:00.354 | 70.00th=[ 115], 80.00th=[ 123], 90.00th=[ 137], 95.00th=[ 145], 00:14:00.354 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 343], 99.95th=[ 449], 00:14:00.354 | 99.99th=[ 635] 00:14:00.354 bw ( KiB/s): min=15352, max=15352, per=32.31%, avg=15352.00, stdev= 0.00, samples=1 00:14:00.354 iops : min= 3838, max= 3838, avg=3838.00, stdev= 0.00, samples=1 00:14:00.354 lat (usec) : 100=17.62%, 250=82.24%, 500=0.13%, 750=0.01% 00:14:00.354 cpu : usr=1.60%, sys=6.50%, ctx=6898, majf=0, minf=7 00:14:00.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.354 issued rwts: total=3313,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.354 job1: (groupid=0, jobs=1): err= 0: pid=66313: Mon Dec 9 09:24:37 2024 00:14:00.354 read: IOPS=2171, BW=8687KiB/s (8896kB/s)(8696KiB/1001msec) 00:14:00.354 slat (nsec): min=6143, max=29652, avg=8642.89, stdev=2104.44 00:14:00.354 clat (usec): min=171, max=1198, avg=227.75, stdev=30.42 00:14:00.354 lat (usec): min=179, max=1207, avg=236.40, stdev=30.52 00:14:00.354 clat percentiles (usec): 00:14:00.354 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 212], 00:14:00.354 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:14:00.354 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 255], 00:14:00.354 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 635], 00:14:00.354 | 99.99th=[ 1205] 00:14:00.354 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:00.354 slat (usec): min=7, max=423, avg=12.78, stdev= 9.75 00:14:00.354 clat (usec): min=2, max=2334, avg=175.41, stdev=48.46 00:14:00.354 lat (usec): min=96, max=2344, avg=188.20, stdev=49.38 00:14:00.354 clat percentiles (usec): 00:14:00.354 | 1.00th=[ 113], 5.00th=[ 129], 10.00th=[ 153], 20.00th=[ 163], 00:14:00.354 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:14:00.354 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:14:00.354 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 334], 99.95th=[ 553], 00:14:00.354 | 99.99th=[ 2343] 00:14:00.354 bw ( KiB/s): min= 9696, max=10805, per=21.57%, avg=10250.50, stdev=784.18, samples=2 00:14:00.354 iops : min= 2424, max= 2701, avg=2562.50, stdev=195.87, samples=2 00:14:00.354 lat (usec) : 4=0.02%, 100=0.06%, 250=96.70%, 500=3.13%, 750=0.04% 00:14:00.354 lat (msec) : 2=0.02%, 4=0.02% 00:14:00.354 cpu : usr=1.30%, sys=4.50%, ctx=4736, majf=0, minf=15 00:14:00.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.354 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.354 job2: (groupid=0, jobs=1): err= 0: pid=66317: Mon Dec 9 09:24:37 2024 00:14:00.354 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:00.354 slat (usec): min=7, max=225, avg= 8.43, stdev= 4.09 00:14:00.354 clat (usec): min=127, max=682, avg=171.61, stdev=45.99 00:14:00.354 lat (usec): min=135, max=690, avg=180.04, stdev=46.65 00:14:00.354 clat percentiles (usec): 00:14:00.354 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:14:00.355 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:14:00.355 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 223], 95.00th=[ 289], 00:14:00.355 | 99.00th=[ 343], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 676], 00:14:00.355 | 99.99th=[ 685] 00:14:00.355 write: IOPS=3184, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1001msec); 0 zone resets 00:14:00.355 slat (usec): min=7, max=100, avg=13.72, stdev= 5.39 00:14:00.355 clat (usec): min=88, max=6164, avg=124.56, stdev=178.27 00:14:00.355 lat (usec): min=101, max=6183, avg=138.28, stdev=178.97 00:14:00.355 clat percentiles (usec): 00:14:00.355 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 103], 00:14:00.355 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 116], 00:14:00.355 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 141], 95.00th=[ 159], 00:14:00.355 | 99.00th=[ 184], 99.50th=[ 215], 99.90th=[ 3720], 99.95th=[ 3818], 00:14:00.355 | 99.99th=[ 6194] 00:14:00.355 bw ( KiB/s): min=12288, max=12288, per=25.86%, avg=12288.00, stdev= 0.00, samples=1 00:14:00.355 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:00.355 lat (usec) : 100=5.06%, 250=91.69%, 500=3.08%, 750=0.03%, 1000=0.02% 00:14:00.355 lat (msec) : 2=0.02%, 4=0.08%, 10=0.02% 00:14:00.355 cpu : usr=1.30%, sys=6.10%, ctx=6261, majf=0, minf=16 00:14:00.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.355 issued rwts: total=3072,3188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.355 job3: (groupid=0, jobs=1): err= 0: pid=66318: Mon Dec 9 09:24:37 2024 00:14:00.355 read: IOPS=2322, BW=9291KiB/s (9514kB/s)(9300KiB/1001msec) 00:14:00.355 slat (nsec): min=6238, max=34266, avg=8360.42, stdev=2418.05 00:14:00.355 clat (usec): min=122, max=1201, avg=217.58, stdev=33.25 00:14:00.355 lat (usec): min=131, max=1209, avg=225.94, stdev=32.76 00:14:00.355 clat percentiles (usec): 00:14:00.355 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 172], 20.00th=[ 208], 00:14:00.355 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:14:00.355 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 249], 00:14:00.355 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 310], 99.95th=[ 310], 00:14:00.355 | 99.99th=[ 1205] 00:14:00.355 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:00.355 slat (nsec): min=8105, max=68714, avg=14899.54, stdev=6088.32 00:14:00.355 clat (usec): min=89, max=2406, avg=168.66, stdev=53.43 00:14:00.355 lat (usec): 
min=102, max=2418, avg=183.56, stdev=54.16 00:14:00.355 clat percentiles (usec): 00:14:00.355 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 120], 20.00th=[ 155], 00:14:00.355 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:14:00.355 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:14:00.355 | 99.00th=[ 219], 99.50th=[ 233], 99.90th=[ 461], 99.95th=[ 807], 00:14:00.355 | 99.99th=[ 2409] 00:14:00.355 bw ( KiB/s): min= 9712, max=10789, per=21.57%, avg=10250.50, stdev=761.55, samples=2 00:14:00.355 iops : min= 2428, max= 2697, avg=2562.50, stdev=190.21, samples=2 00:14:00.355 lat (usec) : 100=0.70%, 250=97.20%, 500=2.05%, 1000=0.02% 00:14:00.355 lat (msec) : 2=0.02%, 4=0.02% 00:14:00.355 cpu : usr=0.90%, sys=5.60%, ctx=4885, majf=0, minf=13 00:14:00.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.355 issued rwts: total=2325,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.355 00:14:00.355 Run status group 0 (all jobs): 00:14:00.355 READ: bw=42.5MiB/s (44.5MB/s), 8687KiB/s-12.9MiB/s (8896kB/s-13.6MB/s), io=42.5MiB (44.6MB), run=1001-1001msec 00:14:00.355 WRITE: bw=46.4MiB/s (48.7MB/s), 9.99MiB/s-14.0MiB/s (10.5MB/s-14.7MB/s), io=46.5MiB (48.7MB), run=1001-1001msec 00:14:00.355 00:14:00.355 Disk stats (read/write): 00:14:00.355 nvme0n1: ios=2961/3072, merge=0/0, ticks=469/347, in_queue=816, util=88.38% 00:14:00.355 nvme0n2: ios=2079/2048, merge=0/0, ticks=473/335, in_queue=808, util=89.40% 00:14:00.355 nvme0n3: ios=2649/3072, merge=0/0, ticks=462/376, in_queue=838, util=89.85% 00:14:00.355 nvme0n4: ios=2035/2048, merge=0/0, ticks=447/363, in_queue=810, util=90.00% 00:14:00.355 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:00.355 [global] 00:14:00.355 thread=1 00:14:00.355 invalidate=1 00:14:00.355 rw=write 00:14:00.355 time_based=1 00:14:00.355 runtime=1 00:14:00.355 ioengine=libaio 00:14:00.355 direct=1 00:14:00.355 bs=4096 00:14:00.355 iodepth=128 00:14:00.355 norandommap=0 00:14:00.355 numjobs=1 00:14:00.355 00:14:00.355 verify_dump=1 00:14:00.355 verify_backlog=512 00:14:00.355 verify_state_save=0 00:14:00.355 do_verify=1 00:14:00.355 verify=crc32c-intel 00:14:00.355 [job0] 00:14:00.355 filename=/dev/nvme0n1 00:14:00.355 [job1] 00:14:00.355 filename=/dev/nvme0n2 00:14:00.355 [job2] 00:14:00.355 filename=/dev/nvme0n3 00:14:00.355 [job3] 00:14:00.355 filename=/dev/nvme0n4 00:14:00.355 Could not set queue depth (nvme0n1) 00:14:00.355 Could not set queue depth (nvme0n2) 00:14:00.355 Could not set queue depth (nvme0n3) 00:14:00.355 Could not set queue depth (nvme0n4) 00:14:00.355 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.355 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.355 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.355 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.355 fio-3.35 00:14:00.355 Starting 4 threads 00:14:01.728 00:14:01.728 job0: (groupid=0, jobs=1): err= 0: pid=66378: Mon 
Dec 9 09:24:39 2024 00:14:01.728 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:14:01.728 slat (usec): min=5, max=5412, avg=85.00, stdev=332.93 00:14:01.728 clat (usec): min=8110, max=20427, avg=11290.57, stdev=917.89 00:14:01.728 lat (usec): min=8128, max=22946, avg=11375.57, stdev=881.95 00:14:01.728 clat percentiles (usec): 00:14:01.728 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:14:01.728 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11338], 60.00th=[11338], 00:14:01.728 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:14:01.728 | 99.00th=[16057], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:14:01.728 | 99.99th=[20317] 00:14:01.728 write: IOPS=5694, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1001msec); 0 zone resets 00:14:01.728 slat (usec): min=7, max=6917, avg=81.51, stdev=296.34 00:14:01.728 clat (usec): min=185, max=25331, avg=11034.77, stdev=2029.80 00:14:01.728 lat (usec): min=2113, max=27355, avg=11116.28, stdev=2018.66 00:14:01.728 clat percentiles (usec): 00:14:01.728 | 1.00th=[ 5473], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:14:01.728 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:14:01.728 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[12649], 00:14:01.728 | 99.00th=[22676], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:14:01.728 | 99.99th=[25297] 00:14:01.728 bw ( KiB/s): min=24576, max=24576, per=36.35%, avg=24576.00, stdev= 0.00, samples=1 00:14:01.728 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:14:01.728 lat (usec) : 250=0.01% 00:14:01.728 lat (msec) : 4=0.28%, 10=4.81%, 20=93.78%, 50=1.12% 00:14:01.728 cpu : usr=6.20%, sys=18.30%, ctx=548, majf=0, minf=15 00:14:01.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:01.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.728 issued rwts: total=5632,5700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.728 job1: (groupid=0, jobs=1): err= 0: pid=66379: Mon Dec 9 09:24:39 2024 00:14:01.728 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:14:01.728 slat (usec): min=6, max=3304, avg=84.40, stdev=306.72 00:14:01.728 clat (usec): min=1437, max=15419, avg=11452.18, stdev=1131.10 00:14:01.728 lat (usec): min=1445, max=15669, avg=11536.59, stdev=1159.32 00:14:01.728 clat percentiles (usec): 00:14:01.728 | 1.00th=[ 6783], 5.00th=[10290], 10.00th=[10814], 20.00th=[10945], 00:14:01.728 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:14:01.728 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 00:14:01.728 | 99.00th=[13960], 99.50th=[14353], 99.90th=[15270], 99.95th=[15401], 00:14:01.728 | 99.99th=[15401] 00:14:01.728 write: IOPS=5628, BW=22.0MiB/s (23.1MB/s)(22.0MiB/1002msec); 0 zone resets 00:14:01.728 slat (usec): min=9, max=3556, avg=81.72, stdev=323.64 00:14:01.728 clat (usec): min=1304, max=15777, avg=11010.28, stdev=855.48 00:14:01.728 lat (usec): min=1316, max=15799, avg=11092.00, stdev=907.56 00:14:01.728 clat percentiles (usec): 00:14:01.728 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:14:01.728 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:14:01.728 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:14:01.728 | 99.00th=[13435], 
99.50th=[13960], 99.90th=[15139], 99.95th=[15270], 00:14:01.728 | 99.99th=[15795] 00:14:01.728 bw ( KiB/s): min=23792, max=23792, per=35.19%, avg=23792.00, stdev= 0.00, samples=1 00:14:01.728 iops : min= 5948, max= 5948, avg=5948.00, stdev= 0.00, samples=1 00:14:01.728 lat (msec) : 2=0.14%, 4=0.18%, 10=5.06%, 20=94.62% 00:14:01.728 cpu : usr=6.39%, sys=21.28%, ctx=407, majf=0, minf=13 00:14:01.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:01.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.728 issued rwts: total=5632,5640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.728 job2: (groupid=0, jobs=1): err= 0: pid=66380: Mon Dec 9 09:24:39 2024 00:14:01.728 read: IOPS=2742, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1004msec) 00:14:01.728 slat (usec): min=4, max=14269, avg=204.10, stdev=1198.57 00:14:01.728 clat (usec): min=842, max=57725, avg=25141.68, stdev=10149.00 00:14:01.728 lat (usec): min=8953, max=57732, avg=25345.79, stdev=10158.91 00:14:01.728 clat percentiles (usec): 00:14:01.728 | 1.00th=[ 9372], 5.00th=[15270], 10.00th=[16581], 20.00th=[17957], 00:14:01.728 | 30.00th=[18220], 40.00th=[18482], 50.00th=[21890], 60.00th=[26084], 00:14:01.728 | 70.00th=[28705], 80.00th=[29492], 90.00th=[42206], 95.00th=[50070], 00:14:01.728 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:14:01.728 | 99.99th=[57934] 00:14:01.728 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:14:01.728 slat (usec): min=9, max=8328, avg=137.59, stdev=736.40 00:14:01.728 clat (usec): min=10684, max=40648, avg=18646.16, stdev=5836.25 00:14:01.728 lat (usec): min=13594, max=40669, avg=18783.75, stdev=5815.39 00:14:01.728 clat percentiles (usec): 00:14:01.728 | 1.00th=[11600], 5.00th=[13960], 10.00th=[14222], 20.00th=[14484], 00:14:01.728 | 30.00th=[14615], 40.00th=[14877], 50.00th=[16188], 60.00th=[17957], 00:14:01.728 | 70.00th=[19530], 80.00th=[22414], 90.00th=[27132], 95.00th=[32900], 00:14:01.728 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:14:01.728 | 99.99th=[40633] 00:14:01.728 bw ( KiB/s): min= 9480, max=15096, per=18.17%, avg=12288.00, stdev=3971.11, samples=2 00:14:01.728 iops : min= 2370, max= 3774, avg=3072.00, stdev=992.78, samples=2 00:14:01.728 lat (usec) : 1000=0.02% 00:14:01.728 lat (msec) : 10=0.55%, 20=61.80%, 50=34.95%, 100=2.68% 00:14:01.728 cpu : usr=1.60%, sys=6.08%, ctx=184, majf=0, minf=15 00:14:01.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:01.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.729 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.729 job3: (groupid=0, jobs=1): err= 0: pid=66381: Mon Dec 9 09:24:39 2024 00:14:01.729 read: IOPS=2481, BW=9926KiB/s (10.2MB/s)(9956KiB/1003msec) 00:14:01.729 slat (usec): min=5, max=13989, avg=191.54, stdev=946.65 00:14:01.729 clat (usec): min=227, max=53011, avg=23212.43, stdev=5713.51 00:14:01.729 lat (usec): min=7916, max=53020, avg=23403.97, stdev=5784.06 00:14:01.729 clat percentiles (usec): 00:14:01.729 | 1.00th=[ 8586], 5.00th=[17171], 10.00th=[18744], 20.00th=[20579], 00:14:01.729 | 30.00th=[20841], 
40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:14:01.729 | 70.00th=[23987], 80.00th=[28181], 90.00th=[29492], 95.00th=[30278], 00:14:01.729 | 99.00th=[45876], 99.50th=[49546], 99.90th=[53216], 99.95th=[53216], 00:14:01.729 | 99.99th=[53216] 00:14:01.729 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:14:01.729 slat (usec): min=8, max=6506, avg=194.46, stdev=773.80 00:14:01.729 clat (usec): min=10114, max=67638, avg=26890.28, stdev=14630.00 00:14:01.729 lat (usec): min=10167, max=67672, avg=27084.74, stdev=14723.91 00:14:01.729 clat percentiles (usec): 00:14:01.729 | 1.00th=[12780], 5.00th=[13960], 10.00th=[14222], 20.00th=[14484], 00:14:01.729 | 30.00th=[14746], 40.00th=[16450], 50.00th=[17433], 60.00th=[23725], 00:14:01.729 | 70.00th=[36963], 80.00th=[42730], 90.00th=[50070], 95.00th=[53740], 00:14:01.729 | 99.00th=[59507], 99.50th=[64226], 99.90th=[67634], 99.95th=[67634], 00:14:01.729 | 99.99th=[67634] 00:14:01.729 bw ( KiB/s): min= 8136, max=12368, per=15.16%, avg=10252.00, stdev=2992.48, samples=2 00:14:01.729 iops : min= 2034, max= 3092, avg=2563.00, stdev=748.12, samples=2 00:14:01.729 lat (usec) : 250=0.02% 00:14:01.729 lat (msec) : 10=1.25%, 20=35.57%, 50=57.81%, 100=5.35% 00:14:01.729 cpu : usr=2.89%, sys=9.38%, ctx=246, majf=0, minf=9 00:14:01.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:01.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:01.729 issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:01.729 00:14:01.729 Run status group 0 (all jobs): 00:14:01.729 READ: bw=64.2MiB/s (67.3MB/s), 9926KiB/s-22.0MiB/s (10.2MB/s-23.0MB/s), io=64.5MiB (67.6MB), run=1001-1004msec 00:14:01.729 WRITE: bw=66.0MiB/s (69.2MB/s), 9.97MiB/s-22.2MiB/s (10.5MB/s-23.3MB/s), io=66.3MiB (69.5MB), run=1001-1004msec 00:14:01.729 00:14:01.729 Disk stats (read/write): 00:14:01.729 nvme0n1: ios=4658/4897, merge=0/0, ticks=12118/11220, in_queue=23338, util=85.79% 00:14:01.729 nvme0n2: ios=4639/4882, merge=0/0, ticks=16071/13071, in_queue=29142, util=86.60% 00:14:01.729 nvme0n3: ios=2176/2560, merge=0/0, ticks=14764/10611, in_queue=25375, util=88.94% 00:14:01.729 nvme0n4: ios=2048/2311, merge=0/0, ticks=23150/26067, in_queue=49217, util=89.49% 00:14:01.729 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:01.729 [global] 00:14:01.729 thread=1 00:14:01.729 invalidate=1 00:14:01.729 rw=randwrite 00:14:01.729 time_based=1 00:14:01.729 runtime=1 00:14:01.729 ioengine=libaio 00:14:01.729 direct=1 00:14:01.729 bs=4096 00:14:01.729 iodepth=128 00:14:01.729 norandommap=0 00:14:01.729 numjobs=1 00:14:01.729 00:14:01.729 verify_dump=1 00:14:01.729 verify_backlog=512 00:14:01.729 verify_state_save=0 00:14:01.729 do_verify=1 00:14:01.729 verify=crc32c-intel 00:14:01.729 [job0] 00:14:01.729 filename=/dev/nvme0n1 00:14:01.729 [job1] 00:14:01.729 filename=/dev/nvme0n2 00:14:01.729 [job2] 00:14:01.729 filename=/dev/nvme0n3 00:14:01.729 [job3] 00:14:01.729 filename=/dev/nvme0n4 00:14:01.729 Could not set queue depth (nvme0n1) 00:14:01.729 Could not set queue depth (nvme0n2) 00:14:01.729 Could not set queue depth (nvme0n3) 00:14:01.729 Could not set queue depth (nvme0n4) 00:14:01.729 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.729 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.729 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.729 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:01.729 fio-3.35 00:14:01.729 Starting 4 threads 00:14:03.104 00:14:03.104 job0: (groupid=0, jobs=1): err= 0: pid=66434: Mon Dec 9 09:24:40 2024 00:14:03.104 read: IOPS=5048, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1002msec) 00:14:03.104 slat (usec): min=4, max=4973, avg=102.03, stdev=425.98 00:14:03.104 clat (usec): min=1360, max=27914, avg=12363.39, stdev=2541.33 00:14:03.104 lat (usec): min=1367, max=27928, avg=12465.42, stdev=2572.34 00:14:03.104 clat percentiles (usec): 00:14:03.104 | 1.00th=[ 5407], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:14:03.104 | 30.00th=[11731], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:14:03.104 | 70.00th=[12256], 80.00th=[13566], 90.00th=[14484], 95.00th=[15795], 00:14:03.104 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25035], 99.95th=[26608], 00:14:03.104 | 99.99th=[27919] 00:14:03.104 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:14:03.104 slat (usec): min=9, max=4306, avg=89.36, stdev=266.41 00:14:03.104 clat (usec): min=8592, max=18074, avg=12521.27, stdev=1203.97 00:14:03.104 lat (usec): min=8605, max=18097, avg=12610.63, stdev=1208.44 00:14:03.104 clat percentiles (usec): 00:14:03.104 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11600], 20.00th=[11863], 00:14:03.104 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:14:03.104 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14222], 95.00th=[15008], 00:14:03.104 | 99.00th=[15795], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:14:03.104 | 99.99th=[17957] 00:14:03.104 bw ( KiB/s): min=20480, max=20480, per=25.82%, avg=20480.00, stdev= 0.00, samples=2 00:14:03.104 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:14:03.104 lat (msec) : 2=0.10%, 4=0.22%, 10=5.08%, 20=93.34%, 50=1.27% 00:14:03.104 cpu : usr=3.00%, sys=8.49%, ctx=820, majf=0, minf=9 00:14:03.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.104 issued rwts: total=5059,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.104 job1: (groupid=0, jobs=1): err= 0: pid=66435: Mon Dec 9 09:24:40 2024 00:14:03.104 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:14:03.104 slat (usec): min=11, max=2602, avg=89.22, stdev=342.34 00:14:03.104 clat (usec): min=9536, max=13558, avg=12376.74, stdev=500.58 00:14:03.104 lat (usec): min=10505, max=14147, avg=12465.96, stdev=377.14 00:14:03.104 clat percentiles (usec): 00:14:03.104 | 1.00th=[10290], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:14:03.104 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:14:03.104 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:14:03.104 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:14:03.104 | 99.99th=[13566] 00:14:03.104 write: IOPS=5396, BW=21.1MiB/s (22.1MB/s)(21.1MiB/1001msec); 0 
zone resets 00:14:03.104 slat (usec): min=15, max=2403, avg=88.01, stdev=300.24 00:14:03.104 clat (usec): min=119, max=13263, avg=11683.42, stdev=1009.79 00:14:03.104 lat (usec): min=2116, max=14065, avg=11771.44, stdev=974.50 00:14:03.104 clat percentiles (usec): 00:14:03.104 | 1.00th=[ 5735], 5.00th=[10814], 10.00th=[11338], 20.00th=[11469], 00:14:03.104 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:14:03.104 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:14:03.104 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13173], 99.95th=[13304], 00:14:03.104 | 99.99th=[13304] 00:14:03.104 bw ( KiB/s): min=20529, max=21704, per=26.62%, avg=21116.50, stdev=830.85, samples=2 00:14:03.104 iops : min= 5132, max= 5426, avg=5279.00, stdev=207.89, samples=2 00:14:03.104 lat (usec) : 250=0.01% 00:14:03.104 lat (msec) : 4=0.30%, 10=1.25%, 20=98.43% 00:14:03.104 cpu : usr=7.10%, sys=22.10%, ctx=431, majf=0, minf=10 00:14:03.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.104 issued rwts: total=5120,5402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.104 job2: (groupid=0, jobs=1): err= 0: pid=66436: Mon Dec 9 09:24:40 2024 00:14:03.104 read: IOPS=4152, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1002msec) 00:14:03.104 slat (usec): min=18, max=3102, avg=109.12, stdev=425.46 00:14:03.104 clat (usec): min=153, max=17460, avg=14640.79, stdev=1396.23 00:14:03.104 lat (usec): min=3034, max=17505, avg=14749.91, stdev=1335.71 00:14:03.104 clat percentiles (usec): 00:14:03.105 | 1.00th=[ 6980], 5.00th=[13173], 10.00th=[13960], 20.00th=[14353], 00:14:03.105 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[14877], 00:14:03.105 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15533], 95.00th=[15926], 00:14:03.105 | 99.00th=[16450], 99.50th=[16712], 99.90th=[16712], 99.95th=[17171], 00:14:03.105 | 99.99th=[17433] 00:14:03.105 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:14:03.105 slat (usec): min=16, max=5700, avg=106.74, stdev=405.86 00:14:03.105 clat (usec): min=9833, max=17858, avg=14237.80, stdev=957.04 00:14:03.105 lat (usec): min=9857, max=17892, avg=14344.54, stdev=902.13 00:14:03.105 clat percentiles (usec): 00:14:03.105 | 1.00th=[11469], 5.00th=[13173], 10.00th=[13435], 20.00th=[13698], 00:14:03.105 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14091], 60.00th=[14353], 00:14:03.105 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15795], 00:14:03.105 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:14:03.105 | 99.99th=[17957] 00:14:03.105 bw ( KiB/s): min=17466, max=18928, per=22.94%, avg=18197.00, stdev=1033.79, samples=2 00:14:03.105 iops : min= 4366, max= 4732, avg=4549.00, stdev=258.80, samples=2 00:14:03.105 lat (usec) : 250=0.01% 00:14:03.105 lat (msec) : 4=0.34%, 10=0.44%, 20=99.20% 00:14:03.105 cpu : usr=5.69%, sys=18.78%, ctx=397, majf=0, minf=15 00:14:03.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:03.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.105 issued rwts: total=4161,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.105 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:14:03.105 job3: (groupid=0, jobs=1): err= 0: pid=66437: Mon Dec 9 09:24:40 2024 00:14:03.105 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:14:03.105 slat (usec): min=5, max=3877, avg=106.60, stdev=540.34 00:14:03.105 clat (usec): min=10206, max=16115, avg=13910.57, stdev=740.04 00:14:03.105 lat (usec): min=12916, max=16124, avg=14017.17, stdev=519.09 00:14:03.105 clat percentiles (usec): 00:14:03.105 | 1.00th=[10683], 5.00th=[13173], 10.00th=[13435], 20.00th=[13566], 00:14:03.105 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:14:03.105 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14615], 95.00th=[14877], 00:14:03.105 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16057], 99.95th=[16057], 00:14:03.105 | 99.99th=[16057] 00:14:03.105 write: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec); 0 zone resets 00:14:03.105 slat (usec): min=9, max=3266, avg=101.59, stdev=479.18 00:14:03.105 clat (usec): min=389, max=14780, avg=13161.25, stdev=1221.14 00:14:03.105 lat (usec): min=3039, max=14802, avg=13262.84, stdev=1120.19 00:14:03.105 clat percentiles (usec): 00:14:03.105 | 1.00th=[ 6718], 5.00th=[11076], 10.00th=[12780], 20.00th=[12911], 00:14:03.105 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:14:03.105 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14091], 00:14:03.105 | 99.00th=[14615], 99.50th=[14746], 99.90th=[14746], 99.95th=[14746], 00:14:03.105 | 99.99th=[14746] 00:14:03.105 bw ( KiB/s): min=17152, max=19759, per=23.27%, avg=18455.50, stdev=1843.43, samples=2 00:14:03.105 iops : min= 4288, max= 4939, avg=4613.50, stdev=460.33, samples=2 00:14:03.105 lat (usec) : 500=0.01% 00:14:03.105 lat (msec) : 4=0.34%, 10=0.55%, 20=99.10% 00:14:03.105 cpu : usr=2.30%, sys=9.49%, ctx=323, majf=0, minf=17 00:14:03.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:03.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.105 issued rwts: total=4608,4737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.105 00:14:03.105 Run status group 0 (all jobs): 00:14:03.105 READ: bw=73.9MiB/s (77.5MB/s), 16.2MiB/s-20.0MiB/s (17.0MB/s-20.9MB/s), io=74.0MiB (77.6MB), run=1001-1002msec 00:14:03.105 WRITE: bw=77.5MiB/s (81.2MB/s), 18.0MiB/s-21.1MiB/s (18.8MB/s-22.1MB/s), io=77.6MiB (81.4MB), run=1001-1002msec 00:14:03.105 00:14:03.105 Disk stats (read/write): 00:14:03.105 nvme0n1: ios=4230/4608, merge=0/0, ticks=17743/17704, in_queue=35447, util=88.48% 00:14:03.105 nvme0n2: ios=4561/4608, merge=0/0, ticks=12111/10630, in_queue=22741, util=89.81% 00:14:03.105 nvme0n3: ios=3634/4002, merge=0/0, ticks=11685/11420, in_queue=23105, util=91.18% 00:14:03.105 nvme0n4: ios=4017/4096, merge=0/0, ticks=13305/12288, in_queue=25593, util=90.52% 00:14:03.105 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:03.105 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66450 00:14:03.105 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:03.105 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:03.105 [global] 00:14:03.105 thread=1 00:14:03.105 invalidate=1 00:14:03.105 rw=read 00:14:03.105 
time_based=1 00:14:03.105 runtime=10 00:14:03.105 ioengine=libaio 00:14:03.105 direct=1 00:14:03.105 bs=4096 00:14:03.105 iodepth=1 00:14:03.105 norandommap=1 00:14:03.105 numjobs=1 00:14:03.105 00:14:03.105 [job0] 00:14:03.105 filename=/dev/nvme0n1 00:14:03.105 [job1] 00:14:03.105 filename=/dev/nvme0n2 00:14:03.105 [job2] 00:14:03.105 filename=/dev/nvme0n3 00:14:03.105 [job3] 00:14:03.105 filename=/dev/nvme0n4 00:14:03.105 Could not set queue depth (nvme0n1) 00:14:03.105 Could not set queue depth (nvme0n2) 00:14:03.105 Could not set queue depth (nvme0n3) 00:14:03.105 Could not set queue depth (nvme0n4) 00:14:03.364 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.364 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.364 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.364 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.364 fio-3.35 00:14:03.364 Starting 4 threads 00:14:06.664 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:06.664 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47042560, buflen=4096 00:14:06.664 fio: pid=66499, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:06.664 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:06.664 fio: pid=66498, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:06.664 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=80531456, buflen=4096 00:14:06.664 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.664 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:06.938 fio: pid=66496, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:06.938 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59256832, buflen=4096 00:14:06.938 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:06.938 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:06.938 fio: pid=66497, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:06.938 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=31006720, buflen=4096 00:14:06.938 00:14:06.938 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66496: Mon Dec 9 09:24:44 2024 00:14:06.938 read: IOPS=4232, BW=16.5MiB/s (17.3MB/s)(56.5MiB/3418msec) 00:14:06.938 slat (usec): min=7, max=14801, avg=13.05, stdev=190.07 00:14:06.938 clat (usec): min=102, max=3068, avg=222.25, stdev=65.88 00:14:06.938 lat (usec): min=109, max=14989, avg=235.31, stdev=200.64 00:14:06.938 clat percentiles (usec): 00:14:06.938 | 1.00th=[ 122], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 155], 00:14:06.938 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 239], 
60.00th=[ 243], 00:14:06.938 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:14:06.938 | 99.00th=[ 347], 99.50th=[ 441], 99.90th=[ 840], 99.95th=[ 947], 00:14:06.938 | 99.99th=[ 2073] 00:14:06.938 bw ( KiB/s): min=15424, max=17832, per=21.10%, avg=16076.00, stdev=885.01, samples=6 00:14:06.938 iops : min= 3856, max= 4458, avg=4019.00, stdev=221.25, samples=6 00:14:06.938 lat (usec) : 250=72.97%, 500=26.62%, 750=0.26%, 1000=0.11% 00:14:06.938 lat (msec) : 2=0.02%, 4=0.01% 00:14:06.938 cpu : usr=0.88%, sys=3.83%, ctx=14477, majf=0, minf=1 00:14:06.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.938 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.938 issued rwts: total=14468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.938 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66497: Mon Dec 9 09:24:44 2024 00:14:06.938 read: IOPS=6559, BW=25.6MiB/s (26.9MB/s)(93.6MiB/3652msec) 00:14:06.938 slat (usec): min=7, max=10491, avg=10.74, stdev=136.24 00:14:06.938 clat (usec): min=95, max=4475, avg=140.95, stdev=71.23 00:14:06.938 lat (usec): min=104, max=10632, avg=151.69, stdev=153.99 00:14:06.938 clat percentiles (usec): 00:14:06.938 | 1.00th=[ 108], 5.00th=[ 119], 10.00th=[ 126], 20.00th=[ 131], 00:14:06.938 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:14:06.938 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 159], 00:14:06.938 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 392], 99.95th=[ 1532], 00:14:06.938 | 99.99th=[ 3949] 00:14:06.938 bw ( KiB/s): min=24600, max=27064, per=34.51%, avg=26296.00, stdev=818.18, samples=7 00:14:06.938 iops : min= 6150, max= 6766, avg=6574.00, stdev=204.55, samples=7 00:14:06.938 lat (usec) : 100=0.05%, 250=99.82%, 500=0.04%, 750=0.03%, 1000=0.01% 00:14:06.938 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:14:06.938 cpu : usr=1.45%, sys=5.20%, ctx=23963, majf=0, minf=2 00:14:06.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.938 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.938 issued rwts: total=23955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.939 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66498: Mon Dec 9 09:24:44 2024 00:14:06.939 read: IOPS=6115, BW=23.9MiB/s (25.0MB/s)(76.8MiB/3215msec) 00:14:06.939 slat (usec): min=6, max=11478, avg= 9.71, stdev=99.05 00:14:06.939 clat (usec): min=110, max=3310, avg=152.99, stdev=36.09 00:14:06.939 lat (usec): min=119, max=11641, avg=162.70, stdev=105.51 00:14:06.939 clat percentiles (usec): 00:14:06.939 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:14:06.939 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:14:06.939 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 186], 00:14:06.939 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 445], 00:14:06.939 | 99.99th=[ 1647] 00:14:06.939 bw ( KiB/s): min=21840, max=25752, per=32.32%, avg=24630.67, stdev=1453.72, samples=6 00:14:06.939 iops : min= 5460, max= 6438, avg=6157.67, stdev=363.43, samples=6 
00:14:06.939 lat (usec) : 250=99.64%, 500=0.32%, 750=0.01%, 1000=0.01% 00:14:06.939 lat (msec) : 2=0.02%, 4=0.01% 00:14:06.939 cpu : usr=1.21%, sys=5.07%, ctx=19671, majf=0, minf=1 00:14:06.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.939 issued rwts: total=19662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.939 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66499: Mon Dec 9 09:24:44 2024 00:14:06.939 read: IOPS=3853, BW=15.0MiB/s (15.8MB/s)(44.9MiB/2981msec) 00:14:06.939 slat (usec): min=6, max=102, avg=10.66, stdev= 3.41 00:14:06.939 clat (usec): min=132, max=5947, avg=247.85, stdev=125.30 00:14:06.939 lat (usec): min=140, max=5955, avg=258.51, stdev=125.57 00:14:06.939 clat percentiles (usec): 00:14:06.939 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229], 00:14:06.939 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:14:06.939 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:14:06.939 | 99.00th=[ 343], 99.50th=[ 404], 99.90th=[ 1074], 99.95th=[ 3982], 00:14:06.939 | 99.99th=[ 5538] 00:14:06.939 bw ( KiB/s): min=14992, max=15824, per=20.14%, avg=15344.00, stdev=336.62, samples=5 00:14:06.939 iops : min= 3748, max= 3956, avg=3836.00, stdev=84.15, samples=5 00:14:06.939 lat (usec) : 250=69.37%, 500=30.29%, 750=0.17%, 1000=0.03% 00:14:06.939 lat (msec) : 2=0.03%, 4=0.04%, 10=0.04% 00:14:06.939 cpu : usr=0.87%, sys=3.79%, ctx=11490, majf=0, minf=2 00:14:06.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.939 issued rwts: total=11486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.939 00:14:06.939 Run status group 0 (all jobs): 00:14:06.939 READ: bw=74.4MiB/s (78.0MB/s), 15.0MiB/s-25.6MiB/s (15.8MB/s-26.9MB/s), io=272MiB (285MB), run=2981-3652msec 00:14:06.939 00:14:06.939 Disk stats (read/write): 00:14:06.939 nvme0n1: ios=14180/0, merge=0/0, ticks=3175/0, in_queue=3175, util=95.28% 00:14:06.939 nvme0n2: ios=23741/0, merge=0/0, ticks=3362/0, in_queue=3362, util=95.43% 00:14:06.939 nvme0n3: ios=19087/0, merge=0/0, ticks=2948/0, in_queue=2948, util=96.31% 00:14:06.939 nvme0n4: ios=11035/0, merge=0/0, ticks=2743/0, in_queue=2743, util=96.50% 00:14:07.197 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:07.197 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:07.197 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:07.197 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:07.455 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:14:07.455 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:07.712 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:07.713 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:07.969 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:07.969 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66450 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:08.225 nvmf hotplug test: fio failed as expected 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:08.225 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.483 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.483 rmmod nvme_tcp 00:14:08.484 rmmod nvme_fabrics 00:14:08.484 rmmod nvme_keyring 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66068 ']' 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66068 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66068 ']' 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66068 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.484 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66068 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.742 killing process with pid 66068 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66068' 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66068 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66068 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:08.742 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:14:08.999 00:14:08.999 real 0m19.790s 00:14:08.999 user 1m12.668s 00:14:08.999 sys 0m11.112s 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.999 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.999 ************************************ 00:14:08.999 END TEST nvmf_fio_target 00:14:08.999 ************************************ 00:14:09.257 09:24:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:09.257 09:24:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.257 09:24:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.257 09:24:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:09.257 ************************************ 00:14:09.257 START TEST nvmf_bdevio 00:14:09.257 ************************************ 00:14:09.257 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:09.257 * Looking for test storage... 
00:14:09.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.258 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.517 --rc genhtml_branch_coverage=1 00:14:09.517 --rc genhtml_function_coverage=1 00:14:09.517 --rc genhtml_legend=1 00:14:09.517 --rc geninfo_all_blocks=1 00:14:09.517 --rc geninfo_unexecuted_blocks=1 00:14:09.517 00:14:09.517 ' 00:14:09.517 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.517 --rc genhtml_branch_coverage=1 00:14:09.517 --rc genhtml_function_coverage=1 00:14:09.517 --rc genhtml_legend=1 00:14:09.517 --rc geninfo_all_blocks=1 00:14:09.517 --rc geninfo_unexecuted_blocks=1 00:14:09.517 00:14:09.517 ' 00:14:09.518 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.518 --rc genhtml_branch_coverage=1 00:14:09.518 --rc genhtml_function_coverage=1 00:14:09.518 --rc genhtml_legend=1 00:14:09.518 --rc geninfo_all_blocks=1 00:14:09.518 --rc geninfo_unexecuted_blocks=1 00:14:09.518 00:14:09.518 ' 00:14:09.518 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.518 --rc genhtml_branch_coverage=1 00:14:09.518 --rc genhtml_function_coverage=1 00:14:09.518 --rc genhtml_legend=1 00:14:09.518 --rc geninfo_all_blocks=1 00:14:09.518 --rc geninfo_unexecuted_blocks=1 00:14:09.518 00:14:09.518 ' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:09.518 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:09.519 Cannot find device "nvmf_init_br" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:09.519 Cannot find device "nvmf_init_br2" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:09.519 Cannot find device "nvmf_tgt_br" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.519 Cannot find device "nvmf_tgt_br2" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:09.519 Cannot find device "nvmf_init_br" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:09.519 Cannot find device "nvmf_init_br2" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:09.519 Cannot find device "nvmf_tgt_br" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:09.519 Cannot find device "nvmf_tgt_br2" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:09.519 Cannot find device "nvmf_br" 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:14:09.519 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:09.776 Cannot find device "nvmf_init_if" 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:09.776 Cannot find device "nvmf_init_if2" 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:09.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:09.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:09.776 
09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:09.776 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:09.777 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:09.777 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:09.777 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:09.777 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:09.777 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.777 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:10.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:10.035 00:14:10.035 --- 10.0.0.3 ping statistics --- 00:14:10.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.035 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:10.035 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:10.035 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:14:10.035 00:14:10.035 --- 10.0.0.4 ping statistics --- 00:14:10.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.035 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:14:10.035 00:14:10.035 --- 10.0.0.1 ping statistics --- 00:14:10.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.035 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:10.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:14:10.035 00:14:10.035 --- 10.0.0.2 ping statistics --- 00:14:10.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.035 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66827 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66827 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66827 ']' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.035 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:10.035 [2024-12-09 09:24:47.645533] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:14:10.035 [2024-12-09 09:24:47.645589] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.294 [2024-12-09 09:24:47.783045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.294 [2024-12-09 09:24:47.841758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.294 [2024-12-09 09:24:47.842015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.294 [2024-12-09 09:24:47.842380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.294 [2024-12-09 09:24:47.842724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.294 [2024-12-09 09:24:47.842861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.294 [2024-12-09 09:24:47.843988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:10.294 [2024-12-09 09:24:47.844018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:10.294 [2024-12-09 09:24:47.844208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.294 [2024-12-09 09:24:47.844210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:10.294 [2024-12-09 09:24:47.887947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.864 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.864 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:10.864 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.864 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.864 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.123 [2024-12-09 09:24:48.638429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.123 Malloc0 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.123 [2024-12-09 09:24:48.698180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:11.123 { 00:14:11.123 "params": { 00:14:11.123 "name": "Nvme$subsystem", 00:14:11.123 "trtype": "$TEST_TRANSPORT", 00:14:11.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.123 "adrfam": "ipv4", 00:14:11.123 "trsvcid": "$NVMF_PORT", 00:14:11.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.123 "hdgst": ${hdgst:-false}, 00:14:11.123 "ddgst": ${ddgst:-false} 00:14:11.123 }, 00:14:11.123 "method": "bdev_nvme_attach_controller" 00:14:11.123 } 00:14:11.123 EOF 00:14:11.123 )") 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:11.123 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:11.123 "params": { 00:14:11.123 "name": "Nvme1", 00:14:11.123 "trtype": "tcp", 00:14:11.123 "traddr": "10.0.0.3", 00:14:11.123 "adrfam": "ipv4", 00:14:11.123 "trsvcid": "4420", 00:14:11.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.123 "hdgst": false, 00:14:11.123 "ddgst": false 00:14:11.123 }, 00:14:11.123 "method": "bdev_nvme_attach_controller" 00:14:11.123 }' 00:14:11.123 [2024-12-09 09:24:48.755026] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:14:11.123 [2024-12-09 09:24:48.755204] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66863 ] 00:14:11.384 [2024-12-09 09:24:48.914078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.384 [2024-12-09 09:24:48.963913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.384 [2024-12-09 09:24:48.964110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.384 [2024-12-09 09:24:48.964111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.384 [2024-12-09 09:24:49.015254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:11.651 I/O targets: 00:14:11.651 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:11.651 00:14:11.651 00:14:11.651 CUnit - A unit testing framework for C - Version 2.1-3 00:14:11.651 http://cunit.sourceforge.net/ 00:14:11.651 00:14:11.651 00:14:11.651 Suite: bdevio tests on: Nvme1n1 00:14:11.651 Test: blockdev write read block ...passed 00:14:11.651 Test: blockdev write zeroes read block ...passed 00:14:11.651 Test: blockdev write zeroes read no split ...passed 00:14:11.651 Test: blockdev write zeroes read split ...passed 00:14:11.651 Test: blockdev write zeroes read split partial ...passed 00:14:11.651 Test: blockdev reset ...[2024-12-09 09:24:49.153601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:11.651 [2024-12-09 09:24:49.153698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4b80 (9): Bad file descriptor 00:14:11.651 passed 00:14:11.651 Test: blockdev write read 8 blocks ...[2024-12-09 09:24:49.173824] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:11.651 passed 00:14:11.651 Test: blockdev write read size > 128k ...passed 00:14:11.651 Test: blockdev write read invalid size ...passed 00:14:11.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:11.651 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:11.651 Test: blockdev write read max offset ...passed 00:14:11.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:11.651 Test: blockdev writev readv 8 blocks ...passed 00:14:11.651 Test: blockdev writev readv 30 x 1block ...passed 00:14:11.651 Test: blockdev writev readv block ...passed 00:14:11.651 Test: blockdev writev readv size > 128k ...passed 00:14:11.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:11.651 Test: blockdev comparev and writev ...[2024-12-09 09:24:49.181619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.181772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.181797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.181824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.182087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.182100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.182115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.182125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.182386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.182399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.182413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.182422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.182683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.182696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:11.651 [2024-12-09 09:24:49.182710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:11.651 [2024-12-09 09:24:49.182720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:11.651 passed 00:14:11.651 Test: blockdev nvme passthru rw ...passed 00:14:11.652 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:24:49.183663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.652 [2024-12-09 09:24:49.183679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:11.652 [2024-12-09 09:24:49.183769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.652 [2024-12-09 09:24:49.183781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:11.652 [2024-12-09 09:24:49.183859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.652 [2024-12-09 09:24:49.183871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:11.652 [2024-12-09 09:24:49.183949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:11.652 [2024-12-09 09:24:49.183960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:11.652 passed 00:14:11.652 Test: blockdev nvme admin passthru ...passed 00:14:11.652 Test: blockdev copy ...passed 00:14:11.652 00:14:11.652 Run Summary: Type Total Ran Passed Failed Inactive 00:14:11.652 suites 1 1 n/a 0 0 00:14:11.652 tests 23 23 23 0 0 00:14:11.652 asserts 152 152 152 0 n/a 00:14:11.652 00:14:11.652 Elapsed time = 0.162 seconds 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.652 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.911 rmmod nvme_tcp 00:14:11.911 rmmod nvme_fabrics 00:14:11.911 rmmod nvme_keyring 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66827 ']' 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66827 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66827 ']' 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66827 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66827 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66827' 00:14:11.911 killing process with pid 66827 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66827 00:14:11.911 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66827 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:12.170 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:12.170 09:24:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:14:12.429 00:14:12.429 real 0m3.223s 00:14:12.429 user 0m8.841s 00:14:12.429 sys 0m1.056s 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.429 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.429 ************************************ 00:14:12.429 END TEST nvmf_bdevio 00:14:12.429 ************************************ 00:14:12.429 09:24:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:12.429 00:14:12.429 real 2m36.577s 00:14:12.429 user 6m36.364s 00:14:12.429 sys 1m2.271s 00:14:12.429 09:24:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.429 ************************************ 00:14:12.429 END TEST nvmf_target_core 00:14:12.429 09:24:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:12.429 ************************************ 00:14:12.429 09:24:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:12.429 09:24:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.429 09:24:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.429 09:24:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.429 ************************************ 00:14:12.429 START TEST nvmf_target_extra 00:14:12.429 ************************************ 00:14:12.429 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:12.690 * Looking for test storage... 
00:14:12.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.690 --rc genhtml_branch_coverage=1 00:14:12.690 --rc genhtml_function_coverage=1 00:14:12.690 --rc genhtml_legend=1 00:14:12.690 --rc geninfo_all_blocks=1 00:14:12.690 --rc geninfo_unexecuted_blocks=1 00:14:12.690 00:14:12.690 ' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.690 --rc genhtml_branch_coverage=1 00:14:12.690 --rc genhtml_function_coverage=1 00:14:12.690 --rc genhtml_legend=1 00:14:12.690 --rc geninfo_all_blocks=1 00:14:12.690 --rc geninfo_unexecuted_blocks=1 00:14:12.690 00:14:12.690 ' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.690 --rc genhtml_branch_coverage=1 00:14:12.690 --rc genhtml_function_coverage=1 00:14:12.690 --rc genhtml_legend=1 00:14:12.690 --rc geninfo_all_blocks=1 00:14:12.690 --rc geninfo_unexecuted_blocks=1 00:14:12.690 00:14:12.690 ' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.690 --rc genhtml_branch_coverage=1 00:14:12.690 --rc genhtml_function_coverage=1 00:14:12.690 --rc genhtml_legend=1 00:14:12.690 --rc geninfo_all_blocks=1 00:14:12.690 --rc geninfo_unexecuted_blocks=1 00:14:12.690 00:14:12.690 ' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.690 09:24:50 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:12.690 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:12.691 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:12.691 09:24:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:12.691 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.691 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.691 09:24:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.691 ************************************ 00:14:12.691 START TEST nvmf_auth_target 00:14:12.691 ************************************ 00:14:12.691 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:12.951 * Looking for test storage... 
00:14:12.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.951 --rc genhtml_branch_coverage=1 00:14:12.951 --rc genhtml_function_coverage=1 00:14:12.951 --rc genhtml_legend=1 00:14:12.951 --rc geninfo_all_blocks=1 00:14:12.951 --rc geninfo_unexecuted_blocks=1 00:14:12.951 00:14:12.951 ' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.951 --rc genhtml_branch_coverage=1 00:14:12.951 --rc genhtml_function_coverage=1 00:14:12.951 --rc genhtml_legend=1 00:14:12.951 --rc geninfo_all_blocks=1 00:14:12.951 --rc geninfo_unexecuted_blocks=1 00:14:12.951 00:14:12.951 ' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.951 --rc genhtml_branch_coverage=1 00:14:12.951 --rc genhtml_function_coverage=1 00:14:12.951 --rc genhtml_legend=1 00:14:12.951 --rc geninfo_all_blocks=1 00:14:12.951 --rc geninfo_unexecuted_blocks=1 00:14:12.951 00:14:12.951 ' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.951 --rc genhtml_branch_coverage=1 00:14:12.951 --rc genhtml_function_coverage=1 00:14:12.951 --rc genhtml_legend=1 00:14:12.951 --rc geninfo_all_blocks=1 00:14:12.951 --rc geninfo_unexecuted_blocks=1 00:14:12.951 00:14:12.951 ' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.951 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.952 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.211 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:13.211 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:13.211 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.212 
09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:13.212 Cannot find device "nvmf_init_br" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:13.212 Cannot find device "nvmf_init_br2" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:13.212 Cannot find device "nvmf_tgt_br" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.212 Cannot find device "nvmf_tgt_br2" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:13.212 Cannot find device "nvmf_init_br" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:13.212 Cannot find device "nvmf_init_br2" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:13.212 Cannot find device "nvmf_tgt_br" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:13.212 Cannot find device "nvmf_tgt_br2" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:13.212 Cannot find device "nvmf_br" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:13.212 Cannot find device "nvmf_init_if" 00:14:13.212 09:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:13.212 Cannot find device "nvmf_init_if2" 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.212 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.471 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.471 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.471 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.471 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.471 09:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.471 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:13.730 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:13.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:14:13.730 00:14:13.731 --- 10.0.0.3 ping statistics --- 00:14:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.731 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:13.731 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:13.731 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:14:13.731 00:14:13.731 --- 10.0.0.4 ping statistics --- 00:14:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.731 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:13.731 00:14:13.731 --- 10.0.0.1 ping statistics --- 00:14:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.731 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:13.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:13.731 00:14:13.731 --- 10.0.0.2 ping statistics --- 00:14:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.731 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67152 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67152 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67152 ']' 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.731 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67184 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=13ce6cddc0559725b933a5374575512dbf5a34ce8f1f0d25 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.j7Q 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 13ce6cddc0559725b933a5374575512dbf5a34ce8f1f0d25 0 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 13ce6cddc0559725b933a5374575512dbf5a34ce8f1f0d25 0 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=13ce6cddc0559725b933a5374575512dbf5a34ce8f1f0d25 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.689 09:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.j7Q 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.j7Q 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.j7Q 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9dfd1129383eaba4a8d731aa8f86f10620824757d8879522e72ce5bf9a7e31fc 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YQs 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9dfd1129383eaba4a8d731aa8f86f10620824757d8879522e72ce5bf9a7e31fc 3 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9dfd1129383eaba4a8d731aa8f86f10620824757d8879522e72ce5bf9a7e31fc 3 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9dfd1129383eaba4a8d731aa8f86f10620824757d8879522e72ce5bf9a7e31fc 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:14.689 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YQs 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YQs 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.YQs 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:14.951 09:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=853df243a2ae480c2a140c425abed1ed 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vJ1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 853df243a2ae480c2a140c425abed1ed 1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 853df243a2ae480c2a140c425abed1ed 1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=853df243a2ae480c2a140c425abed1ed 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vJ1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vJ1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vJ1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=02b56e12cab478ab45287cb3e0cfd846a72114f9af972ca8 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YqO 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 02b56e12cab478ab45287cb3e0cfd846a72114f9af972ca8 2 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 02b56e12cab478ab45287cb3e0cfd846a72114f9af972ca8 2 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=02b56e12cab478ab45287cb3e0cfd846a72114f9af972ca8 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YqO 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YqO 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.YqO 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=496044e8c867b71dde15ba58b6f9d03c1d9e0e1c68943389 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mXT 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 496044e8c867b71dde15ba58b6f9d03c1d9e0e1c68943389 2 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 496044e8c867b71dde15ba58b6f9d03c1d9e0e1c68943389 2 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=496044e8c867b71dde15ba58b6f9d03c1d9e0e1c68943389 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mXT 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mXT 00:14:14.951 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.mXT 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:15.211 09:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=26f75ca81ae4341c5bbf86b2965afeb6 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.y1E 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 26f75ca81ae4341c5bbf86b2965afeb6 1 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 26f75ca81ae4341c5bbf86b2965afeb6 1 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=26f75ca81ae4341c5bbf86b2965afeb6 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.y1E 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.y1E 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.y1E 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85d434bd6a219af99f4c725ec353406f264ff2bb27761514a2553e4b0a1f2990 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KQC 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
85d434bd6a219af99f4c725ec353406f264ff2bb27761514a2553e4b0a1f2990 3 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85d434bd6a219af99f4c725ec353406f264ff2bb27761514a2553e4b0a1f2990 3 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85d434bd6a219af99f4c725ec353406f264ff2bb27761514a2553e4b0a1f2990 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:15.211 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KQC 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KQC 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.KQC 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67152 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67152 ']' 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.212 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67184 /var/tmp/host.sock 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67184 ']' 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
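The gen_dhchap_key calls traced above draw len/2 random bytes with xxd, hand the resulting hex string to format_dhchap_key/format_key, and stash the /tmp/spdk.key-* path in keys[]/ckeys[]. The helper below is a condensed sketch of that procedure, not the literal nvmf/common.sh implementation; it assumes the DHHC-1 representation seen in this log is base64(secret || CRC-32 of the secret, little-endian), prefixed with a two-hex-digit hash id taken from the digests map (00=null, 01=sha256, 02=sha384, 03=sha512) and terminated with a colon.

gen_dhchap_key() {
    # gen_dhchap_key <digest> <len>: print the path of a 0600 file holding a DHHC-1 secret
    local digest=$1 len=$2
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # "len" hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${ids[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(secret).to_bytes(4, "little")       # checksum appended to the secret bytes
print("DHHC-1:%02x:%s:" % (hash_id, base64.b64encode(secret + crc).decode()), end="")
PY
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key sha256 32    # e.g. /tmp/spdk.key-sha256.abc containing DHHC-1:01:...==:

The same representation is what later appears verbatim on the nvme connect command lines as --dhchap-secret / --dhchap-ctrl-secret.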
00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.470 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.j7Q 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.j7Q 00:14:15.730 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.j7Q 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.YQs ]] 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YQs 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YQs 00:14:15.990 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YQs 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vJ1 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vJ1 00:14:16.249 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vJ1 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.YqO ]] 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YqO 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YqO 00:14:16.508 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YqO 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mXT 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.mXT 00:14:16.767 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.mXT 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.y1E ]] 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y1E 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y1E 00:14:17.026 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y1E 00:14:17.285 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:17.285 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KQC 00:14:17.285 09:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.285 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.285 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.285 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KQC 00:14:17.285 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KQC 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.543 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.801 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.801 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.801 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.801 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.059 00:14:18.059 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.059 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.059 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.318 { 00:14:18.318 "cntlid": 1, 00:14:18.318 "qid": 0, 00:14:18.318 "state": "enabled", 00:14:18.318 "thread": "nvmf_tgt_poll_group_000", 00:14:18.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:18.318 "listen_address": { 00:14:18.318 "trtype": "TCP", 00:14:18.318 "adrfam": "IPv4", 00:14:18.318 "traddr": "10.0.0.3", 00:14:18.318 "trsvcid": "4420" 00:14:18.318 }, 00:14:18.318 "peer_address": { 00:14:18.318 "trtype": "TCP", 00:14:18.318 "adrfam": "IPv4", 00:14:18.318 "traddr": "10.0.0.1", 00:14:18.318 "trsvcid": "60572" 00:14:18.318 }, 00:14:18.318 "auth": { 00:14:18.318 "state": "completed", 00:14:18.318 "digest": "sha256", 00:14:18.318 "dhgroup": "null" 00:14:18.318 } 00:14:18.318 } 00:14:18.318 ]' 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.318 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.576 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:18.576 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:22.800 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.801 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.801 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.801 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.801 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.801 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.801 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.801 09:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.801 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.801 { 00:14:22.801 "cntlid": 3, 00:14:22.801 "qid": 0, 00:14:22.801 "state": "enabled", 00:14:22.801 "thread": "nvmf_tgt_poll_group_000", 00:14:22.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:22.801 "listen_address": { 00:14:22.801 "trtype": "TCP", 00:14:22.801 "adrfam": "IPv4", 00:14:22.801 "traddr": "10.0.0.3", 00:14:22.801 "trsvcid": "4420" 00:14:22.801 }, 00:14:22.801 "peer_address": { 00:14:22.801 "trtype": "TCP", 00:14:22.801 "adrfam": "IPv4", 00:14:22.801 "traddr": "10.0.0.1", 00:14:22.801 "trsvcid": "60748" 00:14:22.801 }, 00:14:22.801 "auth": { 00:14:22.801 "state": "completed", 00:14:22.801 "digest": "sha256", 00:14:22.801 "dhgroup": "null" 00:14:22.801 } 00:14:22.801 } 00:14:22.801 ]' 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.801 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.061 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret 
DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:23.061 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.999 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.000 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.259 00:14:24.259 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.259 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.259 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.519 { 00:14:24.519 "cntlid": 5, 00:14:24.519 "qid": 0, 00:14:24.519 "state": "enabled", 00:14:24.519 "thread": "nvmf_tgt_poll_group_000", 00:14:24.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:24.519 "listen_address": { 00:14:24.519 "trtype": "TCP", 00:14:24.519 "adrfam": "IPv4", 00:14:24.519 "traddr": "10.0.0.3", 00:14:24.519 "trsvcid": "4420" 00:14:24.519 }, 00:14:24.519 "peer_address": { 00:14:24.519 "trtype": "TCP", 00:14:24.519 "adrfam": "IPv4", 00:14:24.519 "traddr": "10.0.0.1", 00:14:24.519 "trsvcid": "60772" 00:14:24.519 }, 00:14:24.519 "auth": { 00:14:24.519 "state": "completed", 00:14:24.519 "digest": "sha256", 00:14:24.519 "dhgroup": "null" 00:14:24.519 } 00:14:24.519 } 00:14:24.519 ]' 00:14:24.519 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.778 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.038 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:25.038 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.605 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.864 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.123 00:14:26.123 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.123 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.123 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.382 { 00:14:26.382 "cntlid": 7, 00:14:26.382 "qid": 0, 00:14:26.382 "state": "enabled", 00:14:26.382 "thread": "nvmf_tgt_poll_group_000", 00:14:26.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:26.382 "listen_address": { 00:14:26.382 "trtype": "TCP", 00:14:26.382 "adrfam": "IPv4", 00:14:26.382 "traddr": "10.0.0.3", 00:14:26.382 "trsvcid": "4420" 00:14:26.382 }, 00:14:26.382 "peer_address": { 00:14:26.382 "trtype": "TCP", 00:14:26.382 "adrfam": "IPv4", 00:14:26.382 "traddr": "10.0.0.1", 00:14:26.382 "trsvcid": "60796" 00:14:26.382 }, 00:14:26.382 "auth": { 00:14:26.382 "state": "completed", 00:14:26.382 "digest": "sha256", 00:14:26.382 "dhgroup": "null" 00:14:26.382 } 00:14:26.382 } 00:14:26.382 ]' 00:14:26.382 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.641 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.899 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:26.899 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.466 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.724 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.982 00:14:27.982 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.982 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.982 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.241 { 00:14:28.241 "cntlid": 9, 00:14:28.241 "qid": 0, 00:14:28.241 "state": "enabled", 00:14:28.241 "thread": "nvmf_tgt_poll_group_000", 00:14:28.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:28.241 "listen_address": { 00:14:28.241 "trtype": "TCP", 00:14:28.241 "adrfam": "IPv4", 00:14:28.241 "traddr": "10.0.0.3", 00:14:28.241 "trsvcid": "4420" 00:14:28.241 }, 00:14:28.241 "peer_address": { 00:14:28.241 "trtype": "TCP", 00:14:28.241 "adrfam": "IPv4", 00:14:28.241 "traddr": "10.0.0.1", 00:14:28.241 "trsvcid": "60814" 00:14:28.241 }, 00:14:28.241 "auth": { 00:14:28.241 "state": "completed", 00:14:28.241 "digest": "sha256", 00:14:28.241 "dhgroup": "ffdhe2048" 00:14:28.241 } 00:14:28.241 } 00:14:28.241 ]' 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.241 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.530 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.530 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.530 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.530 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.530 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.790 
09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:28.790 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:29.357 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.614 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.871 00:14:29.871 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.871 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.871 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.130 { 00:14:30.130 "cntlid": 11, 00:14:30.130 "qid": 0, 00:14:30.130 "state": "enabled", 00:14:30.130 "thread": "nvmf_tgt_poll_group_000", 00:14:30.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:30.130 "listen_address": { 00:14:30.130 "trtype": "TCP", 00:14:30.130 "adrfam": "IPv4", 00:14:30.130 "traddr": "10.0.0.3", 00:14:30.130 "trsvcid": "4420" 00:14:30.130 }, 00:14:30.130 "peer_address": { 00:14:30.130 "trtype": "TCP", 00:14:30.130 "adrfam": "IPv4", 00:14:30.130 "traddr": "10.0.0.1", 00:14:30.130 "trsvcid": "60846" 00:14:30.130 }, 00:14:30.130 "auth": { 00:14:30.130 "state": "completed", 00:14:30.130 "digest": "sha256", 00:14:30.130 "dhgroup": "ffdhe2048" 00:14:30.130 } 00:14:30.130 } 00:14:30.130 ]' 00:14:30.130 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.388 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.388 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.388 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.388 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.388 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.388 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.388 
09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.646 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:30.646 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.213 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.472 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.041 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.041 { 00:14:32.041 "cntlid": 13, 00:14:32.041 "qid": 0, 00:14:32.041 "state": "enabled", 00:14:32.041 "thread": "nvmf_tgt_poll_group_000", 00:14:32.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:32.041 "listen_address": { 00:14:32.041 "trtype": "TCP", 00:14:32.041 "adrfam": "IPv4", 00:14:32.041 "traddr": "10.0.0.3", 00:14:32.041 "trsvcid": "4420" 00:14:32.041 }, 00:14:32.041 "peer_address": { 00:14:32.041 "trtype": "TCP", 00:14:32.041 "adrfam": "IPv4", 00:14:32.041 "traddr": "10.0.0.1", 00:14:32.041 "trsvcid": "45568" 00:14:32.041 }, 00:14:32.041 "auth": { 00:14:32.041 "state": "completed", 00:14:32.041 "digest": "sha256", 00:14:32.041 "dhgroup": "ffdhe2048" 00:14:32.041 } 00:14:32.041 } 00:14:32.041 ]' 00:14:32.041 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.300 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.300 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.300 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:32.300 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.300 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.300 09:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.300 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.559 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:32.559 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.128 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
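
The records above and below repeat the same DH-HMAC-CHAP pattern for each digest/dhgroup/key combination under test: the host-side initiator is restricted to one digest and DH group via bdev_nvme_set_options, the host NQN is (re)added to the subsystem with the key pair under test, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key, the resulting qpair's auth block is inspected with jq, and everything is torn down before the next combination. Below is a minimal sketch of one such pass, assembled from the commands visible in this log; it assumes a running SPDK target listening on 10.0.0.3:4420, a host RPC socket at /var/tmp/host.sock, and DH-HMAC-CHAP key objects key1/ckey1 already registered earlier in the test (key registration is not shown in this part of the log). The test script's own helpers (hostrpc, rpc_cmd, bdev_connect, connect_authenticate) wrap these same calls, so this is an illustrative reconstruction rather than the exact auth.sh code.

# Sketch of one DH-HMAC-CHAP authentication pass (assumptions noted above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68

# Restrict the host-side bdev_nvme initiator to one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Allow the host on the target side with the key pair under test
# (target-side calls here go to the default RPC socket).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller exists and the qpair finished authentication
# with the expected digest and DH group.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # completed
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'       # sha256
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'      # ffdhe2048

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

# The log also exercises the kernel initiator with the DHHC-1 secrets that
# correspond to the same key pair (secret values elided here):
#   nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
#       --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 \
#       --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
#   nvme disconnect -n "$subnqn"

Each subsequent block in the log is the same sequence with the next key ID (key0 through key3) and, once all keys are exhausted, the next DH group (ffdhe3072, ffdhe4096, and so on), which is why the qpair dumps differ only in cntlid, peer port, and the auth.dhgroup field.
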
00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.387 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.645 00:14:33.645 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.645 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.645 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.904 { 00:14:33.904 "cntlid": 15, 00:14:33.904 "qid": 0, 00:14:33.904 "state": "enabled", 00:14:33.904 "thread": "nvmf_tgt_poll_group_000", 00:14:33.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:33.904 "listen_address": { 00:14:33.904 "trtype": "TCP", 00:14:33.904 "adrfam": "IPv4", 00:14:33.904 "traddr": "10.0.0.3", 00:14:33.904 "trsvcid": "4420" 00:14:33.904 }, 00:14:33.904 "peer_address": { 00:14:33.904 "trtype": "TCP", 00:14:33.904 "adrfam": "IPv4", 00:14:33.904 "traddr": "10.0.0.1", 00:14:33.904 "trsvcid": "45596" 00:14:33.904 }, 00:14:33.904 "auth": { 00:14:33.904 "state": "completed", 00:14:33.904 "digest": "sha256", 00:14:33.904 "dhgroup": "ffdhe2048" 00:14:33.904 } 00:14:33.904 } 00:14:33.904 ]' 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.904 
09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.904 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.164 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:34.164 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.732 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.052 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.312 00:14:35.312 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.312 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.312 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.572 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.572 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.572 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.572 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.573 { 00:14:35.573 "cntlid": 17, 00:14:35.573 "qid": 0, 00:14:35.573 "state": "enabled", 00:14:35.573 "thread": "nvmf_tgt_poll_group_000", 00:14:35.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:35.573 "listen_address": { 00:14:35.573 "trtype": "TCP", 00:14:35.573 "adrfam": "IPv4", 00:14:35.573 "traddr": "10.0.0.3", 00:14:35.573 "trsvcid": "4420" 00:14:35.573 }, 00:14:35.573 "peer_address": { 00:14:35.573 "trtype": "TCP", 00:14:35.573 "adrfam": "IPv4", 00:14:35.573 "traddr": "10.0.0.1", 00:14:35.573 "trsvcid": "45636" 00:14:35.573 }, 00:14:35.573 "auth": { 00:14:35.573 "state": "completed", 00:14:35.573 "digest": "sha256", 00:14:35.573 "dhgroup": "ffdhe3072" 00:14:35.573 } 00:14:35.573 } 00:14:35.573 ]' 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.573 09:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.573 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.832 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:35.832 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.400 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.659 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.918 00:14:36.918 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.918 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.918 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.177 { 00:14:37.177 "cntlid": 19, 00:14:37.177 "qid": 0, 00:14:37.177 "state": "enabled", 00:14:37.177 "thread": "nvmf_tgt_poll_group_000", 00:14:37.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:37.177 "listen_address": { 00:14:37.177 "trtype": "TCP", 00:14:37.177 "adrfam": "IPv4", 00:14:37.177 "traddr": "10.0.0.3", 00:14:37.177 "trsvcid": "4420" 00:14:37.177 }, 00:14:37.177 "peer_address": { 00:14:37.177 "trtype": "TCP", 00:14:37.177 "adrfam": "IPv4", 00:14:37.177 "traddr": "10.0.0.1", 00:14:37.177 "trsvcid": "45674" 00:14:37.177 }, 00:14:37.177 "auth": { 00:14:37.177 "state": "completed", 00:14:37.177 "digest": "sha256", 00:14:37.177 "dhgroup": "ffdhe3072" 00:14:37.177 } 00:14:37.177 } 00:14:37.177 ]' 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.177 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.436 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.436 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.436 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.436 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.436 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.694 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:37.694 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.271 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.272 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.838 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.838 { 00:14:38.838 "cntlid": 21, 00:14:38.838 "qid": 0, 00:14:38.838 "state": "enabled", 00:14:38.838 "thread": "nvmf_tgt_poll_group_000", 00:14:38.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:38.838 "listen_address": { 00:14:38.838 "trtype": "TCP", 00:14:38.838 "adrfam": "IPv4", 00:14:38.838 "traddr": "10.0.0.3", 00:14:38.838 "trsvcid": "4420" 00:14:38.838 }, 00:14:38.838 "peer_address": { 00:14:38.838 "trtype": "TCP", 00:14:38.838 "adrfam": "IPv4", 00:14:38.838 "traddr": "10.0.0.1", 00:14:38.838 "trsvcid": "45698" 00:14:38.838 }, 00:14:38.838 "auth": { 00:14:38.838 "state": "completed", 00:14:38.838 "digest": "sha256", 00:14:38.838 "dhgroup": "ffdhe3072" 00:14:38.838 } 00:14:38.838 } 00:14:38.838 ]' 00:14:38.838 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.096 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.096 09:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.096 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.096 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.096 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.096 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.096 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.354 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:39.354 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:39.921 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.180 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.440 00:14:40.440 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.440 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.440 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.699 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.699 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.699 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.699 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.699 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.699 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.699 { 00:14:40.699 "cntlid": 23, 00:14:40.699 "qid": 0, 00:14:40.699 "state": "enabled", 00:14:40.699 "thread": "nvmf_tgt_poll_group_000", 00:14:40.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:40.700 "listen_address": { 00:14:40.700 "trtype": "TCP", 00:14:40.700 "adrfam": "IPv4", 00:14:40.700 "traddr": "10.0.0.3", 00:14:40.700 "trsvcid": "4420" 00:14:40.700 }, 00:14:40.700 "peer_address": { 00:14:40.700 "trtype": "TCP", 00:14:40.700 "adrfam": "IPv4", 00:14:40.700 "traddr": "10.0.0.1", 00:14:40.700 "trsvcid": "45710" 00:14:40.700 }, 00:14:40.700 "auth": { 00:14:40.700 "state": "completed", 00:14:40.700 "digest": "sha256", 00:14:40.700 "dhgroup": "ffdhe3072" 00:14:40.700 } 00:14:40.700 } 00:14:40.700 ]' 00:14:40.700 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.700 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:40.700 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.700 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:40.700 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.959 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.960 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.960 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.297 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:41.297 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.883 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.142 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.402 00:14:42.402 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.402 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.402 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.661 { 00:14:42.661 "cntlid": 25, 00:14:42.661 "qid": 0, 00:14:42.661 "state": "enabled", 00:14:42.661 "thread": "nvmf_tgt_poll_group_000", 00:14:42.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:42.661 "listen_address": { 00:14:42.661 "trtype": "TCP", 00:14:42.661 "adrfam": "IPv4", 00:14:42.661 "traddr": "10.0.0.3", 00:14:42.661 "trsvcid": "4420" 00:14:42.661 }, 00:14:42.661 "peer_address": { 00:14:42.661 "trtype": "TCP", 00:14:42.661 "adrfam": "IPv4", 00:14:42.661 "traddr": "10.0.0.1", 00:14:42.661 "trsvcid": "48674" 00:14:42.661 }, 00:14:42.661 "auth": { 00:14:42.661 "state": "completed", 00:14:42.661 "digest": "sha256", 00:14:42.661 "dhgroup": "ffdhe4096" 00:14:42.661 } 00:14:42.661 } 00:14:42.661 ]' 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.661 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.920 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.920 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.920 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.920 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:42.920 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.856 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.423 00:14:44.423 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.423 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.423 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.682 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.682 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.683 { 00:14:44.683 "cntlid": 27, 00:14:44.683 "qid": 0, 00:14:44.683 "state": "enabled", 00:14:44.683 "thread": "nvmf_tgt_poll_group_000", 00:14:44.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:44.683 "listen_address": { 00:14:44.683 "trtype": "TCP", 00:14:44.683 "adrfam": "IPv4", 00:14:44.683 "traddr": "10.0.0.3", 00:14:44.683 "trsvcid": "4420" 00:14:44.683 }, 00:14:44.683 "peer_address": { 00:14:44.683 "trtype": "TCP", 00:14:44.683 "adrfam": "IPv4", 00:14:44.683 "traddr": "10.0.0.1", 00:14:44.683 "trsvcid": "48702" 00:14:44.683 }, 00:14:44.683 "auth": { 00:14:44.683 "state": "completed", 
00:14:44.683 "digest": "sha256", 00:14:44.683 "dhgroup": "ffdhe4096" 00:14:44.683 } 00:14:44.683 } 00:14:44.683 ]' 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.683 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.940 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:44.940 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:45.506 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.764 09:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.764 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.332 00:14:46.332 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.332 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.332 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.333 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.333 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.333 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.333 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.333 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.333 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.333 { 00:14:46.333 "cntlid": 29, 00:14:46.333 "qid": 0, 00:14:46.333 "state": "enabled", 00:14:46.333 "thread": "nvmf_tgt_poll_group_000", 00:14:46.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:46.333 "listen_address": { 00:14:46.333 "trtype": "TCP", 00:14:46.333 "adrfam": "IPv4", 00:14:46.333 "traddr": "10.0.0.3", 00:14:46.333 "trsvcid": "4420" 00:14:46.333 }, 00:14:46.333 "peer_address": { 00:14:46.333 "trtype": "TCP", 00:14:46.333 "adrfam": 
"IPv4", 00:14:46.333 "traddr": "10.0.0.1", 00:14:46.333 "trsvcid": "48732" 00:14:46.333 }, 00:14:46.333 "auth": { 00:14:46.333 "state": "completed", 00:14:46.333 "digest": "sha256", 00:14:46.333 "dhgroup": "ffdhe4096" 00:14:46.333 } 00:14:46.333 } 00:14:46.333 ]' 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.626 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.884 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:46.884 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.461 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:47.724 09:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:47.724 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:47.981 00:14:47.981 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.981 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.982 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.238 { 00:14:48.238 "cntlid": 31, 00:14:48.238 "qid": 0, 00:14:48.238 "state": "enabled", 00:14:48.238 "thread": "nvmf_tgt_poll_group_000", 00:14:48.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:48.238 "listen_address": { 00:14:48.238 "trtype": "TCP", 00:14:48.238 "adrfam": "IPv4", 00:14:48.238 "traddr": "10.0.0.3", 00:14:48.238 "trsvcid": "4420" 00:14:48.238 }, 00:14:48.238 "peer_address": { 00:14:48.238 "trtype": "TCP", 
00:14:48.238 "adrfam": "IPv4", 00:14:48.238 "traddr": "10.0.0.1", 00:14:48.238 "trsvcid": "48768" 00:14:48.238 }, 00:14:48.238 "auth": { 00:14:48.238 "state": "completed", 00:14:48.238 "digest": "sha256", 00:14:48.238 "dhgroup": "ffdhe4096" 00:14:48.238 } 00:14:48.238 } 00:14:48.238 ]' 00:14:48.238 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.495 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.495 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.495 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.495 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.495 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.495 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.495 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.753 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:48.753 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:49.321 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.322 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:49.581 
09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.581 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.149 00:14:50.149 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.149 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.149 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.408 { 00:14:50.408 "cntlid": 33, 00:14:50.408 "qid": 0, 00:14:50.408 "state": "enabled", 00:14:50.408 "thread": "nvmf_tgt_poll_group_000", 00:14:50.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:50.408 "listen_address": { 00:14:50.408 "trtype": "TCP", 00:14:50.408 "adrfam": "IPv4", 00:14:50.408 "traddr": 
"10.0.0.3", 00:14:50.408 "trsvcid": "4420" 00:14:50.408 }, 00:14:50.408 "peer_address": { 00:14:50.408 "trtype": "TCP", 00:14:50.408 "adrfam": "IPv4", 00:14:50.408 "traddr": "10.0.0.1", 00:14:50.408 "trsvcid": "48792" 00:14:50.408 }, 00:14:50.408 "auth": { 00:14:50.408 "state": "completed", 00:14:50.408 "digest": "sha256", 00:14:50.408 "dhgroup": "ffdhe6144" 00:14:50.408 } 00:14:50.408 } 00:14:50.408 ]' 00:14:50.408 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.408 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.408 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.408 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.408 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.408 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.408 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.409 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.668 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:50.668 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:51.236 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.236 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:51.236 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.236 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.496 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.496 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.496 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.496 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.496 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.064 00:14:52.064 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.064 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.064 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.324 { 00:14:52.324 "cntlid": 35, 00:14:52.324 "qid": 0, 00:14:52.324 "state": "enabled", 00:14:52.324 "thread": "nvmf_tgt_poll_group_000", 
00:14:52.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:52.324 "listen_address": { 00:14:52.324 "trtype": "TCP", 00:14:52.324 "adrfam": "IPv4", 00:14:52.324 "traddr": "10.0.0.3", 00:14:52.324 "trsvcid": "4420" 00:14:52.324 }, 00:14:52.324 "peer_address": { 00:14:52.324 "trtype": "TCP", 00:14:52.324 "adrfam": "IPv4", 00:14:52.324 "traddr": "10.0.0.1", 00:14:52.324 "trsvcid": "33796" 00:14:52.324 }, 00:14:52.324 "auth": { 00:14:52.324 "state": "completed", 00:14:52.324 "digest": "sha256", 00:14:52.324 "dhgroup": "ffdhe6144" 00:14:52.324 } 00:14:52.324 } 00:14:52.324 ]' 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.324 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.324 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.324 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.324 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.892 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:52.892 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.460 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.460 09:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.725 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.726 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.984 00:14:53.984 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.984 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.984 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.244 { 
00:14:54.244 "cntlid": 37, 00:14:54.244 "qid": 0, 00:14:54.244 "state": "enabled", 00:14:54.244 "thread": "nvmf_tgt_poll_group_000", 00:14:54.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:54.244 "listen_address": { 00:14:54.244 "trtype": "TCP", 00:14:54.244 "adrfam": "IPv4", 00:14:54.244 "traddr": "10.0.0.3", 00:14:54.244 "trsvcid": "4420" 00:14:54.244 }, 00:14:54.244 "peer_address": { 00:14:54.244 "trtype": "TCP", 00:14:54.244 "adrfam": "IPv4", 00:14:54.244 "traddr": "10.0.0.1", 00:14:54.244 "trsvcid": "33816" 00:14:54.244 }, 00:14:54.244 "auth": { 00:14:54.244 "state": "completed", 00:14:54.244 "digest": "sha256", 00:14:54.244 "dhgroup": "ffdhe6144" 00:14:54.244 } 00:14:54.244 } 00:14:54.244 ]' 00:14:54.244 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.502 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.502 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.502 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.502 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.502 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.502 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.502 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.760 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:54.760 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.327 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.585 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:55.585 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.585 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.585 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:55.585 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:55.585 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.586 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.843 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:14:56.101 { 00:14:56.101 "cntlid": 39, 00:14:56.101 "qid": 0, 00:14:56.101 "state": "enabled", 00:14:56.101 "thread": "nvmf_tgt_poll_group_000", 00:14:56.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:56.101 "listen_address": { 00:14:56.101 "trtype": "TCP", 00:14:56.101 "adrfam": "IPv4", 00:14:56.101 "traddr": "10.0.0.3", 00:14:56.101 "trsvcid": "4420" 00:14:56.101 }, 00:14:56.101 "peer_address": { 00:14:56.101 "trtype": "TCP", 00:14:56.101 "adrfam": "IPv4", 00:14:56.101 "traddr": "10.0.0.1", 00:14:56.101 "trsvcid": "33848" 00:14:56.101 }, 00:14:56.101 "auth": { 00:14:56.101 "state": "completed", 00:14:56.101 "digest": "sha256", 00:14:56.101 "dhgroup": "ffdhe6144" 00:14:56.101 } 00:14:56.101 } 00:14:56.101 ]' 00:14:56.101 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.360 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.618 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:56.618 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:57.184 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.442 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.007 00:14:58.007 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.007 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.007 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.264 { 00:14:58.264 "cntlid": 41, 00:14:58.264 "qid": 0, 00:14:58.264 "state": "enabled", 00:14:58.264 "thread": "nvmf_tgt_poll_group_000", 00:14:58.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:14:58.264 "listen_address": { 00:14:58.264 "trtype": "TCP", 00:14:58.264 "adrfam": "IPv4", 00:14:58.264 "traddr": "10.0.0.3", 00:14:58.264 "trsvcid": "4420" 00:14:58.264 }, 00:14:58.264 "peer_address": { 00:14:58.264 "trtype": "TCP", 00:14:58.264 "adrfam": "IPv4", 00:14:58.264 "traddr": "10.0.0.1", 00:14:58.264 "trsvcid": "33860" 00:14:58.264 }, 00:14:58.264 "auth": { 00:14:58.264 "state": "completed", 00:14:58.264 "digest": "sha256", 00:14:58.264 "dhgroup": "ffdhe8192" 00:14:58.264 } 00:14:58.264 } 00:14:58.264 ]' 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.264 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.522 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:58.522 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:14:59.090 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.349 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.349 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.917 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.176 09:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.176 { 00:15:00.176 "cntlid": 43, 00:15:00.176 "qid": 0, 00:15:00.176 "state": "enabled", 00:15:00.176 "thread": "nvmf_tgt_poll_group_000", 00:15:00.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:00.176 "listen_address": { 00:15:00.176 "trtype": "TCP", 00:15:00.176 "adrfam": "IPv4", 00:15:00.176 "traddr": "10.0.0.3", 00:15:00.176 "trsvcid": "4420" 00:15:00.176 }, 00:15:00.176 "peer_address": { 00:15:00.176 "trtype": "TCP", 00:15:00.176 "adrfam": "IPv4", 00:15:00.176 "traddr": "10.0.0.1", 00:15:00.176 "trsvcid": "33908" 00:15:00.176 }, 00:15:00.176 "auth": { 00:15:00.176 "state": "completed", 00:15:00.176 "digest": "sha256", 00:15:00.176 "dhgroup": "ffdhe8192" 00:15:00.176 } 00:15:00.176 } 00:15:00.176 ]' 00:15:00.176 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.435 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.435 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.435 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.435 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.435 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.435 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.435 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.738 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:00.738 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.356 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.615 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.182 00:15:02.182 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.182 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.182 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.440 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.440 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.440 09:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.440 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.440 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.440 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.440 { 00:15:02.440 "cntlid": 45, 00:15:02.440 "qid": 0, 00:15:02.440 "state": "enabled", 00:15:02.440 "thread": "nvmf_tgt_poll_group_000", 00:15:02.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:02.440 "listen_address": { 00:15:02.440 "trtype": "TCP", 00:15:02.440 "adrfam": "IPv4", 00:15:02.440 "traddr": "10.0.0.3", 00:15:02.440 "trsvcid": "4420" 00:15:02.440 }, 00:15:02.440 "peer_address": { 00:15:02.440 "trtype": "TCP", 00:15:02.440 "adrfam": "IPv4", 00:15:02.440 "traddr": "10.0.0.1", 00:15:02.440 "trsvcid": "57718" 00:15:02.440 }, 00:15:02.440 "auth": { 00:15:02.440 "state": "completed", 00:15:02.440 "digest": "sha256", 00:15:02.440 "dhgroup": "ffdhe8192" 00:15:02.440 } 00:15:02.440 } 00:15:02.440 ]' 00:15:02.440 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.440 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.698 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:02.698 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
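The JSON block above is the nvmf_subsystem_get_qpairs output that the @73-@77 checks consume: the attached controller must report back as nvme0, and the qpair's negotiated auth parameters must match exactly what this iteration configured. A sketch of the same verification, assuming rpc.py points at the target's RPC socket and jq is on PATH:

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # authentication must have completed with exactly the parameters under test
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]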
00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.266 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.525 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.094 00:15:04.094 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.094 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.094 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.354 
09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.354 { 00:15:04.354 "cntlid": 47, 00:15:04.354 "qid": 0, 00:15:04.354 "state": "enabled", 00:15:04.354 "thread": "nvmf_tgt_poll_group_000", 00:15:04.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:04.354 "listen_address": { 00:15:04.354 "trtype": "TCP", 00:15:04.354 "adrfam": "IPv4", 00:15:04.354 "traddr": "10.0.0.3", 00:15:04.354 "trsvcid": "4420" 00:15:04.354 }, 00:15:04.354 "peer_address": { 00:15:04.354 "trtype": "TCP", 00:15:04.354 "adrfam": "IPv4", 00:15:04.354 "traddr": "10.0.0.1", 00:15:04.354 "trsvcid": "57740" 00:15:04.354 }, 00:15:04.354 "auth": { 00:15:04.354 "state": "completed", 00:15:04.354 "digest": "sha256", 00:15:04.354 "dhgroup": "ffdhe8192" 00:15:04.354 } 00:15:04.354 } 00:15:04.354 ]' 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:04.354 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.354 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.354 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.354 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.613 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:04.613 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
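Each key is also exercised through the kernel initiator: after the bdev-level controller is detached, nvme-cli reconnects with the secrets passed on the command line and then disconnects. A sketch of that leg; the DHHC-1 strings are labeled placeholders here, the real values are the ones printed in the trace, and iterations that have no controller key (key3 above) simply omit --dhchap-ctrl-secret:

  # hypothetical placeholders; substitute the full DHHC-1:..: strings from this run
  host_key='DHHC-1:03:<host secret, base64>:'
  ctrl_key='DHHC-1:02:<controller secret, base64>:'
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
      --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 \
      --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0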
00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:05.179 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.437 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.695 00:15:05.695 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.695 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.695 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.953 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.953 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.953 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.953 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.953 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.953 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.953 { 00:15:05.953 "cntlid": 49, 00:15:05.953 "qid": 0, 00:15:05.953 "state": "enabled", 00:15:05.953 "thread": "nvmf_tgt_poll_group_000", 00:15:05.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:05.953 "listen_address": { 00:15:05.953 "trtype": "TCP", 00:15:05.953 "adrfam": "IPv4", 00:15:05.953 "traddr": "10.0.0.3", 00:15:05.953 "trsvcid": "4420" 00:15:05.953 }, 00:15:05.953 "peer_address": { 00:15:05.953 "trtype": "TCP", 00:15:05.953 "adrfam": "IPv4", 00:15:05.954 "traddr": "10.0.0.1", 00:15:05.954 "trsvcid": "57752" 00:15:05.954 }, 00:15:05.954 "auth": { 00:15:05.954 "state": "completed", 00:15:05.954 "digest": "sha384", 00:15:05.954 "dhgroup": "null" 00:15:05.954 } 00:15:05.954 } 00:15:05.954 ]' 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.954 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.212 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:06.212 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.830 09:25:44 
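Every iteration is also torn down symmetrically before the next digest/dhgroup/key combination is configured: the host-side bdev controller is detached before the nvme-cli pass, and once nvme disconnect returns the host entry (and with it the key binding) is removed from the subsystem. A sketch with the same NQNs as above, again using rpc.py for the rpc_cmd/hostrpc wrappers seen in the trace:

  # drop the bdev_nvme controller created for this iteration
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # revoke the host's access (and its DH-HMAC-CHAP key binding) on the target
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68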
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.830 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.089 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.347 00:15:07.347 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.347 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.347 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.604 { 00:15:07.604 "cntlid": 51, 00:15:07.604 "qid": 0, 00:15:07.604 "state": "enabled", 00:15:07.604 "thread": "nvmf_tgt_poll_group_000", 00:15:07.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:07.604 "listen_address": { 00:15:07.604 "trtype": "TCP", 00:15:07.604 "adrfam": "IPv4", 00:15:07.604 "traddr": "10.0.0.3", 00:15:07.604 "trsvcid": "4420" 00:15:07.604 }, 00:15:07.604 "peer_address": { 00:15:07.604 "trtype": "TCP", 00:15:07.604 "adrfam": "IPv4", 00:15:07.604 "traddr": "10.0.0.1", 00:15:07.604 "trsvcid": "57778" 00:15:07.604 }, 00:15:07.604 "auth": { 00:15:07.604 "state": "completed", 00:15:07.604 "digest": "sha384", 00:15:07.604 "dhgroup": "null" 00:15:07.604 } 00:15:07.604 } 00:15:07.604 ]' 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:07.604 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.861 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.861 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.861 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.861 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:07.861 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.794 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.794 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.053 00:15:09.053 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.053 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.053 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.312 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.312 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.313 { 00:15:09.313 "cntlid": 53, 00:15:09.313 "qid": 0, 00:15:09.313 "state": "enabled", 00:15:09.313 "thread": "nvmf_tgt_poll_group_000", 00:15:09.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:09.313 "listen_address": { 00:15:09.313 "trtype": "TCP", 00:15:09.313 "adrfam": "IPv4", 00:15:09.313 "traddr": "10.0.0.3", 00:15:09.313 "trsvcid": "4420" 00:15:09.313 }, 00:15:09.313 "peer_address": { 00:15:09.313 "trtype": "TCP", 00:15:09.313 "adrfam": "IPv4", 00:15:09.313 "traddr": "10.0.0.1", 00:15:09.313 "trsvcid": "57804" 00:15:09.313 }, 00:15:09.313 "auth": { 00:15:09.313 "state": "completed", 00:15:09.313 "digest": "sha384", 00:15:09.313 "dhgroup": "null" 00:15:09.313 } 00:15:09.313 } 00:15:09.313 ]' 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:09.313 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.313 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.313 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.313 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.571 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:09.572 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.142 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.401 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.661 00:15:10.661 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.661 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.661 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.920 { 00:15:10.920 "cntlid": 55, 00:15:10.920 "qid": 0, 00:15:10.920 "state": "enabled", 00:15:10.920 "thread": "nvmf_tgt_poll_group_000", 00:15:10.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:10.920 "listen_address": { 00:15:10.920 "trtype": "TCP", 00:15:10.920 "adrfam": "IPv4", 00:15:10.920 "traddr": "10.0.0.3", 00:15:10.920 "trsvcid": "4420" 00:15:10.920 }, 00:15:10.920 "peer_address": { 00:15:10.920 "trtype": "TCP", 00:15:10.920 "adrfam": "IPv4", 00:15:10.920 "traddr": "10.0.0.1", 00:15:10.920 "trsvcid": "57814" 00:15:10.920 }, 00:15:10.920 "auth": { 00:15:10.920 "state": "completed", 00:15:10.920 "digest": "sha384", 00:15:10.920 "dhgroup": "null" 00:15:10.920 } 00:15:10.920 } 00:15:10.920 ]' 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:10.920 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.180 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.180 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.180 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.180 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:11.180 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:11.750 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.010 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.579 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.579 
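The @118/@119/@120 markers above give the shape of the driver loop: three nested loops over digest, dhgroup and key index, each iteration re-pinning the host options and then calling connect_authenticate. A rough sketch; the digests/dhgroups/keys arrays are populated earlier in auth.sh and only partially visible in this excerpt (sha256 and sha384; null, ffdhe2048 and ffdhe8192; key0..key3):

  for digest in "${digests[@]}"; do        # sha256, sha384, ... (per the trace)
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe8192, ...
      for keyid in "${!keys[@]}"; do       # 0..3
        # hostrpc wraps rpc.py -s /var/tmp/host.sock (see target/auth.sh@31 above)
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done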
09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.579 { 00:15:12.579 "cntlid": 57, 00:15:12.579 "qid": 0, 00:15:12.579 "state": "enabled", 00:15:12.579 "thread": "nvmf_tgt_poll_group_000", 00:15:12.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:12.579 "listen_address": { 00:15:12.579 "trtype": "TCP", 00:15:12.579 "adrfam": "IPv4", 00:15:12.579 "traddr": "10.0.0.3", 00:15:12.579 "trsvcid": "4420" 00:15:12.579 }, 00:15:12.579 "peer_address": { 00:15:12.579 "trtype": "TCP", 00:15:12.579 "adrfam": "IPv4", 00:15:12.579 "traddr": "10.0.0.1", 00:15:12.579 "trsvcid": "47580" 00:15:12.579 }, 00:15:12.579 "auth": { 00:15:12.579 "state": "completed", 00:15:12.579 "digest": "sha384", 00:15:12.579 "dhgroup": "ffdhe2048" 00:15:12.579 } 00:15:12.579 } 00:15:12.579 ]' 00:15:12.579 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.839 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.099 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:13.099 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: 
--dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:13.666 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.925 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.183 00:15:14.183 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.183 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.183 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.442 { 00:15:14.442 "cntlid": 59, 00:15:14.442 "qid": 0, 00:15:14.442 "state": "enabled", 00:15:14.442 "thread": "nvmf_tgt_poll_group_000", 00:15:14.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:14.442 "listen_address": { 00:15:14.442 "trtype": "TCP", 00:15:14.442 "adrfam": "IPv4", 00:15:14.442 "traddr": "10.0.0.3", 00:15:14.442 "trsvcid": "4420" 00:15:14.442 }, 00:15:14.442 "peer_address": { 00:15:14.442 "trtype": "TCP", 00:15:14.442 "adrfam": "IPv4", 00:15:14.442 "traddr": "10.0.0.1", 00:15:14.442 "trsvcid": "47620" 00:15:14.442 }, 00:15:14.442 "auth": { 00:15:14.442 "state": "completed", 00:15:14.442 "digest": "sha384", 00:15:14.442 "dhgroup": "ffdhe2048" 00:15:14.442 } 00:15:14.442 } 00:15:14.442 ]' 00:15:14.442 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.442 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.701 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:14.701 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.268 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.526 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:15.526 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.526 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.527 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.785 00:15:15.785 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.785 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.785 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.042 { 00:15:16.042 "cntlid": 61, 00:15:16.042 "qid": 0, 00:15:16.042 "state": "enabled", 00:15:16.042 "thread": "nvmf_tgt_poll_group_000", 00:15:16.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:16.042 "listen_address": { 00:15:16.042 "trtype": "TCP", 00:15:16.042 "adrfam": "IPv4", 00:15:16.042 "traddr": "10.0.0.3", 00:15:16.042 "trsvcid": "4420" 00:15:16.042 }, 00:15:16.042 "peer_address": { 00:15:16.042 "trtype": "TCP", 00:15:16.042 "adrfam": "IPv4", 00:15:16.042 "traddr": "10.0.0.1", 00:15:16.042 "trsvcid": "47642" 00:15:16.042 }, 00:15:16.042 "auth": { 00:15:16.042 "state": "completed", 00:15:16.042 "digest": "sha384", 00:15:16.042 "dhgroup": "ffdhe2048" 00:15:16.042 } 00:15:16.042 } 00:15:16.042 ]' 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.042 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.300 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.300 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.300 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.300 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.300 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.557 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:16.557 09:25:54 
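The cycle that repeats for every key in the trace above boils down to three RPC calls: restrict the host application's bdev_nvme options to the digest/dhgroup pair under test, register the host NQN on the target subsystem with that key (plus the bidirectional controller key when one is defined), and attach a controller through the host's RPC socket. A condensed sketch of that sequence, using only calls and flags that appear verbatim in the log (HOST_NQN stands in for the long uuid-based host NQN, and key2/ckey2 are simply the key names this iteration happens to use):

    # host application: accept only the digest/dhgroup being exercised
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # target application (rpc_cmd is the test framework's wrapper around rpc.py
    # for the target's RPC socket): allow the host with its DH-HMAC-CHAP key(s)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host application: attach a controller, authenticating with the same key(s)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2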
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:17.121 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.121 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:17.121 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.121 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.122 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.122 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.122 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.122 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.380 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.711 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.711 { 00:15:17.711 "cntlid": 63, 00:15:17.711 "qid": 0, 00:15:17.711 "state": "enabled", 00:15:17.711 "thread": "nvmf_tgt_poll_group_000", 00:15:17.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:17.711 "listen_address": { 00:15:17.711 "trtype": "TCP", 00:15:17.711 "adrfam": "IPv4", 00:15:17.711 "traddr": "10.0.0.3", 00:15:17.711 "trsvcid": "4420" 00:15:17.711 }, 00:15:17.711 "peer_address": { 00:15:17.711 "trtype": "TCP", 00:15:17.711 "adrfam": "IPv4", 00:15:17.711 "traddr": "10.0.0.1", 00:15:17.711 "trsvcid": "47678" 00:15:17.711 }, 00:15:17.711 "auth": { 00:15:17.711 "state": "completed", 00:15:17.711 "digest": "sha384", 00:15:17.711 "dhgroup": "ffdhe2048" 00:15:17.711 } 00:15:17.711 } 00:15:17.711 ]' 00:15:17.711 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.968 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.226 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:18.226 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:18.793 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:19.051 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.310 00:15:19.310 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.310 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.310 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.568 { 00:15:19.568 "cntlid": 65, 00:15:19.568 "qid": 0, 00:15:19.568 "state": "enabled", 00:15:19.568 "thread": "nvmf_tgt_poll_group_000", 00:15:19.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:19.568 "listen_address": { 00:15:19.568 "trtype": "TCP", 00:15:19.568 "adrfam": "IPv4", 00:15:19.568 "traddr": "10.0.0.3", 00:15:19.568 "trsvcid": "4420" 00:15:19.568 }, 00:15:19.568 "peer_address": { 00:15:19.568 "trtype": "TCP", 00:15:19.568 "adrfam": "IPv4", 00:15:19.568 "traddr": "10.0.0.1", 00:15:19.568 "trsvcid": "47700" 00:15:19.568 }, 00:15:19.568 "auth": { 00:15:19.568 "state": "completed", 00:15:19.568 "digest": "sha384", 00:15:19.568 "dhgroup": "ffdhe3072" 00:15:19.568 } 00:15:19.568 } 00:15:19.568 ]' 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:19.568 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.569 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.569 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.569 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.827 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:19.827 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.395 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.654 09:25:58 
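Alongside the SPDK host application, each key is also validated from the kernel initiator with nvme-cli, then torn down and deregistered before the next combination. Reduced to its essentials (the DHHC-1:xx: strings are the base64-encoded secrets generated for this particular run; the placeholders below only indicate their shape):

    # kernel host: connect, presenting the host secret and the expected controller secret
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOST_NQN" --hostid "$HOST_ID" -l 0 \
        --dhchap-secret "DHHC-1:01:<host-key-secret>" \
        --dhchap-ctrl-secret "DHHC-1:02:<ctrlr-key-secret>"

    # drop the session and remove the host entry before the next digest/dhgroup/key
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"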
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.654 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.913 00:15:20.913 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.913 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.913 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.173 { 00:15:21.173 "cntlid": 67, 00:15:21.173 "qid": 0, 00:15:21.173 "state": "enabled", 00:15:21.173 "thread": "nvmf_tgt_poll_group_000", 00:15:21.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:21.173 "listen_address": { 00:15:21.173 "trtype": "TCP", 00:15:21.173 "adrfam": "IPv4", 00:15:21.173 "traddr": "10.0.0.3", 00:15:21.173 "trsvcid": "4420" 00:15:21.173 }, 00:15:21.173 "peer_address": { 00:15:21.173 "trtype": "TCP", 00:15:21.173 "adrfam": "IPv4", 00:15:21.173 "traddr": "10.0.0.1", 00:15:21.173 "trsvcid": "47708" 00:15:21.173 }, 00:15:21.173 "auth": { 00:15:21.173 "state": "completed", 00:15:21.173 "digest": "sha384", 00:15:21.173 "dhgroup": "ffdhe3072" 00:15:21.173 } 00:15:21.173 } 00:15:21.173 ]' 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.173 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.432 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.432 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.432 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.432 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.432 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.692 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:21.692 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.291 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.859 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.859 { 00:15:22.859 "cntlid": 69, 00:15:22.859 "qid": 0, 00:15:22.859 "state": "enabled", 00:15:22.859 "thread": "nvmf_tgt_poll_group_000", 00:15:22.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:22.859 "listen_address": { 00:15:22.859 "trtype": "TCP", 00:15:22.859 "adrfam": "IPv4", 00:15:22.859 "traddr": "10.0.0.3", 00:15:22.859 "trsvcid": "4420" 00:15:22.859 }, 00:15:22.859 "peer_address": { 00:15:22.859 "trtype": "TCP", 00:15:22.859 "adrfam": "IPv4", 00:15:22.859 "traddr": "10.0.0.1", 00:15:22.859 "trsvcid": "35202" 00:15:22.859 }, 00:15:22.859 "auth": { 00:15:22.859 "state": "completed", 00:15:22.859 "digest": "sha384", 00:15:22.859 "dhgroup": "ffdhe3072" 00:15:22.859 } 00:15:22.859 } 00:15:22.859 ]' 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.859 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.118 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.118 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:23.118 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.118 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.118 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:23.118 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.377 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:23.377 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.945 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.204 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.462 00:15:24.462 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.463 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.463 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.720 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.720 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.720 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.721 { 00:15:24.721 "cntlid": 71, 00:15:24.721 "qid": 0, 00:15:24.721 "state": "enabled", 00:15:24.721 "thread": "nvmf_tgt_poll_group_000", 00:15:24.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:24.721 "listen_address": { 00:15:24.721 "trtype": "TCP", 00:15:24.721 "adrfam": "IPv4", 00:15:24.721 "traddr": "10.0.0.3", 00:15:24.721 "trsvcid": "4420" 00:15:24.721 }, 00:15:24.721 "peer_address": { 00:15:24.721 "trtype": "TCP", 00:15:24.721 "adrfam": "IPv4", 00:15:24.721 "traddr": "10.0.0.1", 00:15:24.721 "trsvcid": "35224" 00:15:24.721 }, 00:15:24.721 "auth": { 00:15:24.721 "state": "completed", 00:15:24.721 "digest": "sha384", 00:15:24.721 "dhgroup": "ffdhe3072" 00:15:24.721 } 00:15:24.721 } 00:15:24.721 ]' 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.721 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.980 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:24.980 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.548 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.807 09:26:03 
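Every attach in the trace is verified the same way before the controller is detached: the host application should report exactly one controller named nvme0, and the target's qpair listing should show the negotiated digest, DH group, and a completed authentication state. A sketch of those checks, with the jq expressions taken from the log (the digest/dhgroup values vary per iteration):

    # host side: the attached controller is visible under the expected name
    name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]

    # target side: the qpair carries the negotiated auth parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

    # detach before moving on to the next key
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0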
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.807 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.066 00:15:26.326 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.326 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.326 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.326 { 00:15:26.326 "cntlid": 73, 00:15:26.326 "qid": 0, 00:15:26.326 "state": "enabled", 00:15:26.326 "thread": "nvmf_tgt_poll_group_000", 00:15:26.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:26.326 "listen_address": { 00:15:26.326 "trtype": "TCP", 00:15:26.326 "adrfam": "IPv4", 00:15:26.326 "traddr": "10.0.0.3", 00:15:26.326 "trsvcid": "4420" 00:15:26.326 }, 00:15:26.326 "peer_address": { 00:15:26.326 "trtype": "TCP", 00:15:26.326 "adrfam": "IPv4", 00:15:26.326 "traddr": "10.0.0.1", 00:15:26.326 "trsvcid": "35270" 00:15:26.326 }, 00:15:26.326 "auth": { 00:15:26.326 "state": "completed", 00:15:26.326 "digest": "sha384", 00:15:26.326 "dhgroup": "ffdhe4096" 00:15:26.326 } 00:15:26.326 } 00:15:26.326 ]' 00:15:26.326 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.585 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.843 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:26.843 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.410 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.669 09:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.669 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.926 00:15:27.926 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.926 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.926 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.187 { 00:15:28.187 "cntlid": 75, 00:15:28.187 "qid": 0, 00:15:28.187 "state": "enabled", 00:15:28.187 "thread": "nvmf_tgt_poll_group_000", 00:15:28.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:28.187 "listen_address": { 00:15:28.187 "trtype": "TCP", 00:15:28.187 "adrfam": "IPv4", 00:15:28.187 "traddr": "10.0.0.3", 00:15:28.187 "trsvcid": "4420" 00:15:28.187 }, 00:15:28.187 "peer_address": { 00:15:28.187 "trtype": "TCP", 00:15:28.187 "adrfam": "IPv4", 00:15:28.187 "traddr": "10.0.0.1", 00:15:28.187 "trsvcid": "35308" 00:15:28.187 }, 00:15:28.187 "auth": { 00:15:28.187 "state": "completed", 00:15:28.187 "digest": "sha384", 00:15:28.187 "dhgroup": "ffdhe4096" 00:15:28.187 } 00:15:28.187 } 00:15:28.187 ]' 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.187 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.506 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:28.506 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.090 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.358 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.615 00:15:29.615 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.615 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.615 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.872 { 00:15:29.872 "cntlid": 77, 00:15:29.872 "qid": 0, 00:15:29.872 "state": "enabled", 00:15:29.872 "thread": "nvmf_tgt_poll_group_000", 00:15:29.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:29.872 "listen_address": { 00:15:29.872 "trtype": "TCP", 00:15:29.872 "adrfam": "IPv4", 00:15:29.872 "traddr": "10.0.0.3", 00:15:29.872 "trsvcid": "4420" 00:15:29.872 }, 00:15:29.872 "peer_address": { 00:15:29.872 "trtype": "TCP", 00:15:29.872 "adrfam": "IPv4", 00:15:29.872 "traddr": "10.0.0.1", 00:15:29.872 "trsvcid": "35330" 00:15:29.872 }, 00:15:29.872 "auth": { 00:15:29.872 "state": "completed", 00:15:29.872 "digest": "sha384", 00:15:29.872 "dhgroup": "ffdhe4096" 00:15:29.872 } 00:15:29.872 } 00:15:29.872 ]' 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.872 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:15:30.131 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.131 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.131 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.131 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.131 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.388 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:30.388 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:30.954 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.954 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:30.955 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.955 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.955 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.955 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.955 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:31.212 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:31.212 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.212 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.212 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.213 09:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.213 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.470 00:15:31.470 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.470 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.470 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.728 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.729 { 00:15:31.729 "cntlid": 79, 00:15:31.729 "qid": 0, 00:15:31.729 "state": "enabled", 00:15:31.729 "thread": "nvmf_tgt_poll_group_000", 00:15:31.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:31.729 "listen_address": { 00:15:31.729 "trtype": "TCP", 00:15:31.729 "adrfam": "IPv4", 00:15:31.729 "traddr": "10.0.0.3", 00:15:31.729 "trsvcid": "4420" 00:15:31.729 }, 00:15:31.729 "peer_address": { 00:15:31.729 "trtype": "TCP", 00:15:31.729 "adrfam": "IPv4", 00:15:31.729 "traddr": "10.0.0.1", 00:15:31.729 "trsvcid": "35362" 00:15:31.729 }, 00:15:31.729 "auth": { 00:15:31.729 "state": "completed", 00:15:31.729 "digest": "sha384", 00:15:31.729 "dhgroup": "ffdhe4096" 00:15:31.729 } 00:15:31.729 } 00:15:31.729 ]' 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.729 09:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.729 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.987 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:31.987 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:32.553 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.553 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.554 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.812 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.378 00:15:33.378 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.378 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.378 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.636 { 00:15:33.636 "cntlid": 81, 00:15:33.636 "qid": 0, 00:15:33.636 "state": "enabled", 00:15:33.636 "thread": "nvmf_tgt_poll_group_000", 00:15:33.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:33.636 "listen_address": { 00:15:33.636 "trtype": "TCP", 00:15:33.636 "adrfam": "IPv4", 00:15:33.636 "traddr": "10.0.0.3", 00:15:33.636 "trsvcid": "4420" 00:15:33.636 }, 00:15:33.636 "peer_address": { 00:15:33.636 "trtype": "TCP", 00:15:33.636 "adrfam": "IPv4", 00:15:33.636 "traddr": "10.0.0.1", 00:15:33.636 "trsvcid": "45508" 00:15:33.636 }, 00:15:33.636 "auth": { 00:15:33.636 "state": "completed", 00:15:33.636 "digest": "sha384", 00:15:33.636 "dhgroup": "ffdhe6144" 00:15:33.636 } 00:15:33.636 } 00:15:33.636 ]' 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
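The trace above repeats the same verification cycle once per key/dhgroup combination: pin the host's DH-CHAP options, authorize the host NQN on the subsystem with the key under test, attach a controller from the host-side SPDK app, read the negotiated auth parameters back from the target's qpair, then tear everything down and repeat the handshake through the kernel initiator with the raw DHHC-1 secrets. A condensed sketch of one such cycle follows, built only from the RPC and nvme-cli calls visible in this log; the rpc_host/rpc_target wrapper names, the KEY0_SECRET/CKEY0_SECRET variables and the assumption that rpc_target talks to the target app's default RPC socket are illustrative stand-ins, not values taken from the run.

#!/usr/bin/env bash
# Sketch of one DH-CHAP cycle as exercised by target/auth.sh (sha384 / ffdhe6144 / key0).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68
SUBNQN=nqn.2024-03.io.spdk:cnode0
rpc_host()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side app ("hostrpc")
rpc_target() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # target-side app ("rpc_cmd"), default socket assumed

# 1. Restrict the host bdev layer to the digest/dhgroup pair under test.
rpc_host bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# 2. Authorize the host on the subsystem; keyring entries key0/ckey0 are assumed to exist already.
rpc_target nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach a controller from the host side -- this is where the DH-CHAP handshake actually runs.
rpc_host bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. Confirm the negotiated digest, dhgroup and auth state on the target's qpair.
rpc_target nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .digest, .dhgroup, .state'
# 5. Tear down the RPC path, repeat the handshake through the kernel initiator using the
#    raw DHHC-1 secrets (placeholders below), then de-authorize the host for the next key.
rpc_host bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid "${HOSTNQN#*uuid:}" -l 0 \
    --dhchap-secret "$KEY0_SECRET" --dhchap-ctrl-secret "$CKEY0_SECRET"
nvme disconnect -n "$SUBNQN"
rpc_target nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"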
00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.636 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.895 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:33.895 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:34.462 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.462 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:34.463 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.463 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.463 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.463 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.463 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.463 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.721 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.359 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.359 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.359 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.359 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.359 { 00:15:35.359 "cntlid": 83, 00:15:35.359 "qid": 0, 00:15:35.359 "state": "enabled", 00:15:35.359 "thread": "nvmf_tgt_poll_group_000", 00:15:35.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:35.359 "listen_address": { 00:15:35.359 "trtype": "TCP", 00:15:35.359 "adrfam": "IPv4", 00:15:35.359 "traddr": "10.0.0.3", 00:15:35.359 "trsvcid": "4420" 00:15:35.359 }, 00:15:35.359 "peer_address": { 00:15:35.359 "trtype": "TCP", 00:15:35.359 "adrfam": "IPv4", 00:15:35.359 "traddr": "10.0.0.1", 00:15:35.359 "trsvcid": "45538" 00:15:35.359 }, 00:15:35.359 "auth": { 00:15:35.359 "state": "completed", 00:15:35.359 "digest": "sha384", 
00:15:35.359 "dhgroup": "ffdhe6144" 00:15:35.359 } 00:15:35.359 } 00:15:35.359 ]' 00:15:35.359 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:35.688 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.255 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.514 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.082 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.082 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.082 { 00:15:37.082 "cntlid": 85, 00:15:37.082 "qid": 0, 00:15:37.082 "state": "enabled", 00:15:37.082 "thread": "nvmf_tgt_poll_group_000", 00:15:37.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:37.082 "listen_address": { 00:15:37.082 "trtype": "TCP", 00:15:37.082 "adrfam": "IPv4", 00:15:37.082 "traddr": "10.0.0.3", 00:15:37.082 "trsvcid": "4420" 00:15:37.082 }, 00:15:37.082 "peer_address": { 00:15:37.082 "trtype": "TCP", 00:15:37.082 "adrfam": "IPv4", 00:15:37.082 "traddr": "10.0.0.1", 00:15:37.082 "trsvcid": "45570" 
00:15:37.082 }, 00:15:37.082 "auth": { 00:15:37.082 "state": "completed", 00:15:37.082 "digest": "sha384", 00:15:37.082 "dhgroup": "ffdhe6144" 00:15:37.082 } 00:15:37.082 } 00:15:37.082 ]' 00:15:37.083 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.341 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.598 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:37.598 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.164 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.423 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.682 00:15:38.682 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.682 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.682 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.941 { 00:15:38.941 "cntlid": 87, 00:15:38.941 "qid": 0, 00:15:38.941 "state": "enabled", 00:15:38.941 "thread": "nvmf_tgt_poll_group_000", 00:15:38.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:38.941 "listen_address": { 00:15:38.941 "trtype": "TCP", 00:15:38.941 "adrfam": "IPv4", 00:15:38.941 "traddr": "10.0.0.3", 00:15:38.941 "trsvcid": "4420" 00:15:38.941 }, 00:15:38.941 "peer_address": { 00:15:38.941 "trtype": "TCP", 00:15:38.941 "adrfam": "IPv4", 00:15:38.941 "traddr": "10.0.0.1", 00:15:38.941 "trsvcid": 
"45588" 00:15:38.941 }, 00:15:38.941 "auth": { 00:15:38.941 "state": "completed", 00:15:38.941 "digest": "sha384", 00:15:38.941 "dhgroup": "ffdhe6144" 00:15:38.941 } 00:15:38.941 } 00:15:38.941 ]' 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.941 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.200 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.200 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.200 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.200 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:39.200 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.768 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.026 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.592 00:15:40.592 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.592 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.592 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.852 { 00:15:40.852 "cntlid": 89, 00:15:40.852 "qid": 0, 00:15:40.852 "state": "enabled", 00:15:40.852 "thread": "nvmf_tgt_poll_group_000", 00:15:40.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:40.852 "listen_address": { 00:15:40.852 "trtype": "TCP", 00:15:40.852 "adrfam": "IPv4", 00:15:40.852 "traddr": "10.0.0.3", 00:15:40.852 "trsvcid": "4420" 00:15:40.852 }, 00:15:40.852 "peer_address": { 00:15:40.852 
"trtype": "TCP", 00:15:40.852 "adrfam": "IPv4", 00:15:40.852 "traddr": "10.0.0.1", 00:15:40.852 "trsvcid": "45634" 00:15:40.852 }, 00:15:40.852 "auth": { 00:15:40.852 "state": "completed", 00:15:40.852 "digest": "sha384", 00:15:40.852 "dhgroup": "ffdhe8192" 00:15:40.852 } 00:15:40.852 } 00:15:40.852 ]' 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.852 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.111 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.111 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.111 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.111 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:41.111 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.677 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.019 09:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:42.019 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.019 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.020 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.586 00:15:42.586 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.586 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.586 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.844 { 00:15:42.844 "cntlid": 91, 00:15:42.844 "qid": 0, 00:15:42.844 "state": "enabled", 00:15:42.844 "thread": "nvmf_tgt_poll_group_000", 00:15:42.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 
00:15:42.844 "listen_address": { 00:15:42.844 "trtype": "TCP", 00:15:42.844 "adrfam": "IPv4", 00:15:42.844 "traddr": "10.0.0.3", 00:15:42.844 "trsvcid": "4420" 00:15:42.844 }, 00:15:42.844 "peer_address": { 00:15:42.844 "trtype": "TCP", 00:15:42.844 "adrfam": "IPv4", 00:15:42.844 "traddr": "10.0.0.1", 00:15:42.844 "trsvcid": "39730" 00:15:42.844 }, 00:15:42.844 "auth": { 00:15:42.844 "state": "completed", 00:15:42.844 "digest": "sha384", 00:15:42.844 "dhgroup": "ffdhe8192" 00:15:42.844 } 00:15:42.844 } 00:15:42.844 ]' 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.844 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.102 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:43.102 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.669 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.927 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.492 00:15:44.492 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.492 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.492 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.751 { 00:15:44.751 "cntlid": 93, 00:15:44.751 "qid": 0, 00:15:44.751 "state": "enabled", 00:15:44.751 "thread": 
"nvmf_tgt_poll_group_000", 00:15:44.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:44.751 "listen_address": { 00:15:44.751 "trtype": "TCP", 00:15:44.751 "adrfam": "IPv4", 00:15:44.751 "traddr": "10.0.0.3", 00:15:44.751 "trsvcid": "4420" 00:15:44.751 }, 00:15:44.751 "peer_address": { 00:15:44.751 "trtype": "TCP", 00:15:44.751 "adrfam": "IPv4", 00:15:44.751 "traddr": "10.0.0.1", 00:15:44.751 "trsvcid": "39754" 00:15:44.751 }, 00:15:44.751 "auth": { 00:15:44.751 "state": "completed", 00:15:44.751 "digest": "sha384", 00:15:44.751 "dhgroup": "ffdhe8192" 00:15:44.751 } 00:15:44.751 } 00:15:44.751 ]' 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.751 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.009 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.009 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.009 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.009 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.009 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.009 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.269 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:45.269 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.835 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:45.835 09:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.093 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.660 00:15:46.660 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.660 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.660 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.935 { 00:15:46.935 "cntlid": 95, 00:15:46.935 "qid": 0, 00:15:46.935 "state": "enabled", 00:15:46.935 
"thread": "nvmf_tgt_poll_group_000", 00:15:46.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:46.935 "listen_address": { 00:15:46.935 "trtype": "TCP", 00:15:46.935 "adrfam": "IPv4", 00:15:46.935 "traddr": "10.0.0.3", 00:15:46.935 "trsvcid": "4420" 00:15:46.935 }, 00:15:46.935 "peer_address": { 00:15:46.935 "trtype": "TCP", 00:15:46.935 "adrfam": "IPv4", 00:15:46.935 "traddr": "10.0.0.1", 00:15:46.935 "trsvcid": "39790" 00:15:46.935 }, 00:15:46.935 "auth": { 00:15:46.935 "state": "completed", 00:15:46.935 "digest": "sha384", 00:15:46.935 "dhgroup": "ffdhe8192" 00:15:46.935 } 00:15:46.935 } 00:15:46.935 ]' 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.935 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.194 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:47.194 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.759 09:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.759 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.017 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.274 00:15:48.274 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.274 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.274 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.532 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.532 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.532 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.532 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.532 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.532 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.532 { 00:15:48.532 "cntlid": 97, 00:15:48.532 "qid": 0, 00:15:48.532 "state": "enabled", 00:15:48.532 "thread": "nvmf_tgt_poll_group_000", 00:15:48.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:48.533 "listen_address": { 00:15:48.533 "trtype": "TCP", 00:15:48.533 "adrfam": "IPv4", 00:15:48.533 "traddr": "10.0.0.3", 00:15:48.533 "trsvcid": "4420" 00:15:48.533 }, 00:15:48.533 "peer_address": { 00:15:48.533 "trtype": "TCP", 00:15:48.533 "adrfam": "IPv4", 00:15:48.533 "traddr": "10.0.0.1", 00:15:48.533 "trsvcid": "39824" 00:15:48.533 }, 00:15:48.533 "auth": { 00:15:48.533 "state": "completed", 00:15:48.533 "digest": "sha512", 00:15:48.533 "dhgroup": "null" 00:15:48.533 } 00:15:48.533 } 00:15:48.533 ]' 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.533 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.790 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:48.790 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.357 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.616 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.875 00:15:49.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.875 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.133 09:26:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.133 { 00:15:50.133 "cntlid": 99, 00:15:50.133 "qid": 0, 00:15:50.133 "state": "enabled", 00:15:50.133 "thread": "nvmf_tgt_poll_group_000", 00:15:50.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:50.133 "listen_address": { 00:15:50.133 "trtype": "TCP", 00:15:50.133 "adrfam": "IPv4", 00:15:50.133 "traddr": "10.0.0.3", 00:15:50.133 "trsvcid": "4420" 00:15:50.133 }, 00:15:50.133 "peer_address": { 00:15:50.133 "trtype": "TCP", 00:15:50.133 "adrfam": "IPv4", 00:15:50.133 "traddr": "10.0.0.1", 00:15:50.133 "trsvcid": "39844" 00:15:50.133 }, 00:15:50.133 "auth": { 00:15:50.133 "state": "completed", 00:15:50.133 "digest": "sha512", 00:15:50.133 "dhgroup": "null" 00:15:50.133 } 00:15:50.133 } 00:15:50.133 ]' 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.133 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.392 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:50.392 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.392 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.392 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.392 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.651 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:50.651 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.217 09:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.217 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.477 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.736 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.736 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.994 { 00:15:51.994 "cntlid": 101, 00:15:51.994 "qid": 0, 00:15:51.994 "state": "enabled", 00:15:51.994 "thread": "nvmf_tgt_poll_group_000", 00:15:51.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:51.994 "listen_address": { 00:15:51.994 "trtype": "TCP", 00:15:51.994 "adrfam": "IPv4", 00:15:51.994 "traddr": "10.0.0.3", 00:15:51.994 "trsvcid": "4420" 00:15:51.994 }, 00:15:51.994 "peer_address": { 00:15:51.994 "trtype": "TCP", 00:15:51.994 "adrfam": "IPv4", 00:15:51.994 "traddr": "10.0.0.1", 00:15:51.994 "trsvcid": "39872" 00:15:51.994 }, 00:15:51.994 "auth": { 00:15:51.994 "state": "completed", 00:15:51.994 "digest": "sha512", 00:15:51.994 "dhgroup": "null" 00:15:51.994 } 00:15:51.994 } 00:15:51.994 ]' 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.994 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.253 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:52.253 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.821 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.080 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.340 00:15:53.340 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.340 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.340 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.598 { 00:15:53.598 "cntlid": 103, 00:15:53.598 "qid": 0, 00:15:53.598 "state": "enabled", 00:15:53.598 "thread": "nvmf_tgt_poll_group_000", 00:15:53.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:53.598 "listen_address": { 00:15:53.598 "trtype": "TCP", 00:15:53.598 "adrfam": "IPv4", 00:15:53.598 "traddr": "10.0.0.3", 00:15:53.598 "trsvcid": "4420" 00:15:53.598 }, 00:15:53.598 "peer_address": { 00:15:53.598 "trtype": "TCP", 00:15:53.598 "adrfam": "IPv4", 00:15:53.598 "traddr": "10.0.0.1", 00:15:53.598 "trsvcid": "35592" 00:15:53.598 }, 00:15:53.598 "auth": { 00:15:53.598 "state": "completed", 00:15:53.598 "digest": "sha512", 00:15:53.598 "dhgroup": "null" 00:15:53.598 } 00:15:53.598 } 00:15:53.598 ]' 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.598 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.599 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.857 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:53.857 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:54.424 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:54.682 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:54.682 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.682 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.682 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.682 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.683 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.941 00:15:54.941 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.941 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.941 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.199 
09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.199 { 00:15:55.199 "cntlid": 105, 00:15:55.199 "qid": 0, 00:15:55.199 "state": "enabled", 00:15:55.199 "thread": "nvmf_tgt_poll_group_000", 00:15:55.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:55.199 "listen_address": { 00:15:55.199 "trtype": "TCP", 00:15:55.199 "adrfam": "IPv4", 00:15:55.199 "traddr": "10.0.0.3", 00:15:55.199 "trsvcid": "4420" 00:15:55.199 }, 00:15:55.199 "peer_address": { 00:15:55.199 "trtype": "TCP", 00:15:55.199 "adrfam": "IPv4", 00:15:55.199 "traddr": "10.0.0.1", 00:15:55.199 "trsvcid": "35612" 00:15:55.199 }, 00:15:55.199 "auth": { 00:15:55.199 "state": "completed", 00:15:55.199 "digest": "sha512", 00:15:55.199 "dhgroup": "ffdhe2048" 00:15:55.199 } 00:15:55.199 } 00:15:55.199 ]' 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.199 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.457 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.457 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.457 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.457 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:55.457 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:15:56.022 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:56.280 09:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.280 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.537 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.796 { 00:15:56.796 "cntlid": 107, 00:15:56.796 "qid": 0, 00:15:56.796 "state": "enabled", 00:15:56.796 "thread": "nvmf_tgt_poll_group_000", 00:15:56.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:56.796 "listen_address": { 00:15:56.796 "trtype": "TCP", 00:15:56.796 "adrfam": "IPv4", 00:15:56.796 "traddr": "10.0.0.3", 00:15:56.796 "trsvcid": "4420" 00:15:56.796 }, 00:15:56.796 "peer_address": { 00:15:56.796 "trtype": "TCP", 00:15:56.796 "adrfam": "IPv4", 00:15:56.796 "traddr": "10.0.0.1", 00:15:56.796 "trsvcid": "35648" 00:15:56.796 }, 00:15:56.796 "auth": { 00:15:56.796 "state": "completed", 00:15:56.796 "digest": "sha512", 00:15:56.796 "dhgroup": "ffdhe2048" 00:15:56.796 } 00:15:56.796 } 00:15:56.796 ]' 00:15:56.796 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.054 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.311 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:57.311 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.877 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.135 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.136 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.393 00:15:58.393 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.393 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.393 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.651 { 00:15:58.651 "cntlid": 109, 00:15:58.651 "qid": 0, 00:15:58.651 "state": "enabled", 00:15:58.651 "thread": "nvmf_tgt_poll_group_000", 00:15:58.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:15:58.651 "listen_address": { 00:15:58.651 "trtype": "TCP", 00:15:58.651 "adrfam": "IPv4", 00:15:58.651 "traddr": "10.0.0.3", 00:15:58.651 "trsvcid": "4420" 00:15:58.651 }, 00:15:58.651 "peer_address": { 00:15:58.651 "trtype": "TCP", 00:15:58.651 "adrfam": "IPv4", 00:15:58.651 "traddr": "10.0.0.1", 00:15:58.651 "trsvcid": "35664" 00:15:58.651 }, 00:15:58.651 "auth": { 00:15:58.651 "state": "completed", 00:15:58.651 "digest": "sha512", 00:15:58.651 "dhgroup": "ffdhe2048" 00:15:58.651 } 00:15:58.651 } 00:15:58.651 ]' 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.651 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.909 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:58.909 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.476 09:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.476 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.734 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.992 00:15:59.992 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.992 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.992 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.249 { 00:16:00.249 "cntlid": 111, 00:16:00.249 "qid": 0, 00:16:00.249 "state": "enabled", 00:16:00.249 "thread": "nvmf_tgt_poll_group_000", 00:16:00.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:00.249 "listen_address": { 00:16:00.249 "trtype": "TCP", 00:16:00.249 "adrfam": "IPv4", 00:16:00.249 "traddr": "10.0.0.3", 00:16:00.249 "trsvcid": "4420" 00:16:00.249 }, 00:16:00.249 "peer_address": { 00:16:00.249 "trtype": "TCP", 00:16:00.249 "adrfam": "IPv4", 00:16:00.249 "traddr": "10.0.0.1", 00:16:00.249 "trsvcid": "35696" 00:16:00.249 }, 00:16:00.249 "auth": { 00:16:00.249 "state": "completed", 00:16:00.249 "digest": "sha512", 00:16:00.249 "dhgroup": "ffdhe2048" 00:16:00.249 } 00:16:00.249 } 00:16:00.249 ]' 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.249 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.506 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.506 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.506 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.785 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:00.785 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.351 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.351 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.915 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.915 { 00:16:01.915 "cntlid": 113, 00:16:01.915 "qid": 0, 00:16:01.915 "state": "enabled", 00:16:01.915 "thread": "nvmf_tgt_poll_group_000", 00:16:01.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:01.915 "listen_address": { 00:16:01.915 "trtype": "TCP", 00:16:01.915 "adrfam": "IPv4", 00:16:01.915 "traddr": "10.0.0.3", 00:16:01.915 "trsvcid": "4420" 00:16:01.915 }, 00:16:01.915 "peer_address": { 00:16:01.915 "trtype": "TCP", 00:16:01.915 "adrfam": "IPv4", 00:16:01.915 "traddr": "10.0.0.1", 00:16:01.915 "trsvcid": "35724" 00:16:01.915 }, 00:16:01.915 "auth": { 00:16:01.915 "state": "completed", 00:16:01.915 "digest": "sha512", 00:16:01.915 "dhgroup": "ffdhe3072" 00:16:01.915 } 00:16:01.915 } 00:16:01.915 ]' 00:16:01.915 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.173 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.432 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:02.432 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:03.031 
09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.031 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.546 00:16:03.546 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.546 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.546 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.805 { 00:16:03.805 "cntlid": 115, 00:16:03.805 "qid": 0, 00:16:03.805 "state": "enabled", 00:16:03.805 "thread": "nvmf_tgt_poll_group_000", 00:16:03.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:03.805 "listen_address": { 00:16:03.805 "trtype": "TCP", 00:16:03.805 "adrfam": "IPv4", 00:16:03.805 "traddr": "10.0.0.3", 00:16:03.805 "trsvcid": "4420" 00:16:03.805 }, 00:16:03.805 "peer_address": { 00:16:03.805 "trtype": "TCP", 00:16:03.805 "adrfam": "IPv4", 00:16:03.805 "traddr": "10.0.0.1", 00:16:03.805 "trsvcid": "44478" 00:16:03.805 }, 00:16:03.805 "auth": { 00:16:03.805 "state": "completed", 00:16:03.805 "digest": "sha512", 00:16:03.805 "dhgroup": "ffdhe3072" 00:16:03.805 } 00:16:03.805 } 00:16:03.805 ]' 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.805 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.064 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:04.064 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: 
--dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.631 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.890 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.149 00:16:05.408 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.408 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.408 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.408 { 00:16:05.408 "cntlid": 117, 00:16:05.408 "qid": 0, 00:16:05.408 "state": "enabled", 00:16:05.408 "thread": "nvmf_tgt_poll_group_000", 00:16:05.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:05.408 "listen_address": { 00:16:05.408 "trtype": "TCP", 00:16:05.408 "adrfam": "IPv4", 00:16:05.408 "traddr": "10.0.0.3", 00:16:05.408 "trsvcid": "4420" 00:16:05.408 }, 00:16:05.408 "peer_address": { 00:16:05.408 "trtype": "TCP", 00:16:05.408 "adrfam": "IPv4", 00:16:05.408 "traddr": "10.0.0.1", 00:16:05.408 "trsvcid": "44506" 00:16:05.408 }, 00:16:05.408 "auth": { 00:16:05.408 "state": "completed", 00:16:05.408 "digest": "sha512", 00:16:05.408 "dhgroup": "ffdhe3072" 00:16:05.408 } 00:16:05.408 } 00:16:05.408 ]' 00:16:05.408 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.668 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.928 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:05.928 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid 
e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.496 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.754 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:06.754 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.754 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.755 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.013 00:16:07.013 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.013 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.013 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.272 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.272 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.272 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.272 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.272 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.272 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.272 { 00:16:07.272 "cntlid": 119, 00:16:07.272 "qid": 0, 00:16:07.272 "state": "enabled", 00:16:07.272 "thread": "nvmf_tgt_poll_group_000", 00:16:07.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:07.272 "listen_address": { 00:16:07.272 "trtype": "TCP", 00:16:07.272 "adrfam": "IPv4", 00:16:07.272 "traddr": "10.0.0.3", 00:16:07.273 "trsvcid": "4420" 00:16:07.273 }, 00:16:07.273 "peer_address": { 00:16:07.273 "trtype": "TCP", 00:16:07.273 "adrfam": "IPv4", 00:16:07.273 "traddr": "10.0.0.1", 00:16:07.273 "trsvcid": "44526" 00:16:07.273 }, 00:16:07.273 "auth": { 00:16:07.273 "state": "completed", 00:16:07.273 "digest": "sha512", 00:16:07.273 "dhgroup": "ffdhe3072" 00:16:07.273 } 00:16:07.273 } 00:16:07.273 ]' 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.273 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.531 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:07.531 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret 
DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:08.098 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.357 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.617 00:16:08.617 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.617 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.617 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.876 { 00:16:08.876 "cntlid": 121, 00:16:08.876 "qid": 0, 00:16:08.876 "state": "enabled", 00:16:08.876 "thread": "nvmf_tgt_poll_group_000", 00:16:08.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:08.876 "listen_address": { 00:16:08.876 "trtype": "TCP", 00:16:08.876 "adrfam": "IPv4", 00:16:08.876 "traddr": "10.0.0.3", 00:16:08.876 "trsvcid": "4420" 00:16:08.876 }, 00:16:08.876 "peer_address": { 00:16:08.876 "trtype": "TCP", 00:16:08.876 "adrfam": "IPv4", 00:16:08.876 "traddr": "10.0.0.1", 00:16:08.876 "trsvcid": "44550" 00:16:08.876 }, 00:16:08.876 "auth": { 00:16:08.876 "state": "completed", 00:16:08.876 "digest": "sha512", 00:16:08.876 "dhgroup": "ffdhe4096" 00:16:08.876 } 00:16:08.876 } 00:16:08.876 ]' 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.876 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.136 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.136 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.136 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.136 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.136 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.396 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:09.396 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:09.963 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.260 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.260 09:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.556 00:16:10.556 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.557 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.557 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.557 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.557 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.557 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.557 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.815 { 00:16:10.815 "cntlid": 123, 00:16:10.815 "qid": 0, 00:16:10.815 "state": "enabled", 00:16:10.815 "thread": "nvmf_tgt_poll_group_000", 00:16:10.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:10.815 "listen_address": { 00:16:10.815 "trtype": "TCP", 00:16:10.815 "adrfam": "IPv4", 00:16:10.815 "traddr": "10.0.0.3", 00:16:10.815 "trsvcid": "4420" 00:16:10.815 }, 00:16:10.815 "peer_address": { 00:16:10.815 "trtype": "TCP", 00:16:10.815 "adrfam": "IPv4", 00:16:10.815 "traddr": "10.0.0.1", 00:16:10.815 "trsvcid": "44576" 00:16:10.815 }, 00:16:10.815 "auth": { 00:16:10.815 "state": "completed", 00:16:10.815 "digest": "sha512", 00:16:10.815 "dhgroup": "ffdhe4096" 00:16:10.815 } 00:16:10.815 } 00:16:10.815 ]' 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.815 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.073 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret 
DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:11.074 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:11.638 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.897 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.155 00:16:12.155 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.155 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.155 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.413 { 00:16:12.413 "cntlid": 125, 00:16:12.413 "qid": 0, 00:16:12.413 "state": "enabled", 00:16:12.413 "thread": "nvmf_tgt_poll_group_000", 00:16:12.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:12.413 "listen_address": { 00:16:12.413 "trtype": "TCP", 00:16:12.413 "adrfam": "IPv4", 00:16:12.413 "traddr": "10.0.0.3", 00:16:12.413 "trsvcid": "4420" 00:16:12.413 }, 00:16:12.413 "peer_address": { 00:16:12.413 "trtype": "TCP", 00:16:12.413 "adrfam": "IPv4", 00:16:12.413 "traddr": "10.0.0.1", 00:16:12.413 "trsvcid": "43878" 00:16:12.413 }, 00:16:12.413 "auth": { 00:16:12.413 "state": "completed", 00:16:12.413 "digest": "sha512", 00:16:12.413 "dhgroup": "ffdhe4096" 00:16:12.413 } 00:16:12.413 } 00:16:12.413 ]' 00:16:12.413 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.413 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.670 09:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:12.670 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.237 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.495 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.753 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.012 { 00:16:14.012 "cntlid": 127, 00:16:14.012 "qid": 0, 00:16:14.012 "state": "enabled", 00:16:14.012 "thread": "nvmf_tgt_poll_group_000", 00:16:14.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:14.012 "listen_address": { 00:16:14.012 "trtype": "TCP", 00:16:14.012 "adrfam": "IPv4", 00:16:14.012 "traddr": "10.0.0.3", 00:16:14.012 "trsvcid": "4420" 00:16:14.012 }, 00:16:14.012 "peer_address": { 00:16:14.012 "trtype": "TCP", 00:16:14.012 "adrfam": "IPv4", 00:16:14.012 "traddr": "10.0.0.1", 00:16:14.012 "trsvcid": "43910" 00:16:14.012 }, 00:16:14.012 "auth": { 00:16:14.012 "state": "completed", 00:16:14.012 "digest": "sha512", 00:16:14.012 "dhgroup": "ffdhe4096" 00:16:14.012 } 00:16:14.012 } 00:16:14.012 ]' 00:16:14.012 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.269 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
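[Note on the traced sequence] For each digest/dhgroup pair, the auth.sh iteration traced above and below reduces to the command sequence sketched here. This is a condensed, illustrative rendering of the commands visible in this run's trace (same RPC socket, addresses, NQNs, and key names), not harness output itself; it is shown for the sha512/ffdhe4096 pair with key0/ckey0, and iterations that use key3 simply omit --dhchap-ctrlr-key, exactly as in the trace. rpc_cmd drives the nvmf target, while the host-side initiator is driven through rpc.py against /var/tmp/host.sock; the literal DHHC-1 secrets passed to nvme connect appear verbatim in the surrounding log and are elided here as placeholders.

    # host side: pin the digest/dhgroup to be negotiated for this iteration
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # target side: authorize the host NQN with DH-CHAP key0/ckey0
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach with the same keys, check the negotiated auth state, then detach
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'   # digest/dhgroup/state checks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel-initiator check of the same secrets, then clean up before the next pair
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
        --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
        --dhchap-secret <DHHC-1 host secret> --dhchap-ctrl-secret <DHHC-1 ctrl secret>
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68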
00:16:14.526 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:14.526 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.092 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
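On the target side, each pass first authorizes the host NQN for the key being tested and revokes it again once the pass is finished. A minimal sketch, using the suite's rpc_cmd wrapper (which talks to the nvmf target's RPC socket) and the key0/ckey0 pair from the iteration above; other iterations pass key1..key3 and, where a controller key exists, the matching ckey:

    # Authorize the host for DH-HMAC-CHAP with the key pair under test.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # ... connect, verify, and disconnect (see the host-side sketch above) ...

    # Revoke the host again before moving on to the next key/dhgroup combination.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68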
00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.367 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.626 00:16:15.626 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.626 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.626 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.887 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.887 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.888 { 00:16:15.888 "cntlid": 129, 00:16:15.888 "qid": 0, 00:16:15.888 "state": "enabled", 00:16:15.888 "thread": "nvmf_tgt_poll_group_000", 00:16:15.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:15.888 "listen_address": { 00:16:15.888 "trtype": "TCP", 00:16:15.888 "adrfam": "IPv4", 00:16:15.888 "traddr": "10.0.0.3", 00:16:15.888 "trsvcid": "4420" 00:16:15.888 }, 00:16:15.888 "peer_address": { 00:16:15.888 "trtype": "TCP", 00:16:15.888 "adrfam": "IPv4", 00:16:15.888 "traddr": "10.0.0.1", 00:16:15.888 "trsvcid": "43930" 00:16:15.888 }, 00:16:15.888 "auth": { 00:16:15.888 "state": "completed", 00:16:15.888 "digest": "sha512", 00:16:15.888 "dhgroup": "ffdhe6144" 00:16:15.888 } 00:16:15.888 } 00:16:15.888 ]' 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.888 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.147 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:16.147 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.714 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.973 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.650 00:16:17.650 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.650 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.650 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.650 { 00:16:17.650 "cntlid": 131, 00:16:17.650 "qid": 0, 00:16:17.650 "state": "enabled", 00:16:17.650 "thread": "nvmf_tgt_poll_group_000", 00:16:17.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:17.650 "listen_address": { 00:16:17.650 "trtype": "TCP", 00:16:17.650 "adrfam": "IPv4", 00:16:17.650 "traddr": "10.0.0.3", 00:16:17.650 "trsvcid": "4420" 00:16:17.650 }, 00:16:17.650 "peer_address": { 00:16:17.650 "trtype": "TCP", 00:16:17.650 "adrfam": "IPv4", 00:16:17.650 "traddr": "10.0.0.1", 00:16:17.650 "trsvcid": "43944" 00:16:17.650 }, 00:16:17.650 "auth": { 00:16:17.650 "state": "completed", 00:16:17.650 "digest": "sha512", 00:16:17.650 "dhgroup": "ffdhe6144" 00:16:17.650 } 00:16:17.650 } 00:16:17.650 ]' 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
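The "completed" check above is the core assertion of each pass: after the controller attaches, the target's qpair listing must report the negotiated digest, the negotiated DH group, and an authentication state of "completed". A minimal sketch of that verification for the sha512/ffdhe6144 pass, using the same rpc_cmd wrapper and jq filters that appear in the log:

    # Ask the target for the qpairs of the subsystem under test.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The single qpair must carry the expected auth parameters.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]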
00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.650 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.908 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:17.908 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.475 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.734 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.320 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.320 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.320 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.320 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.320 { 00:16:19.320 "cntlid": 133, 00:16:19.320 "qid": 0, 00:16:19.320 "state": "enabled", 00:16:19.320 "thread": "nvmf_tgt_poll_group_000", 00:16:19.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:19.320 "listen_address": { 00:16:19.320 "trtype": "TCP", 00:16:19.320 "adrfam": "IPv4", 00:16:19.320 "traddr": "10.0.0.3", 00:16:19.320 "trsvcid": "4420" 00:16:19.320 }, 00:16:19.320 "peer_address": { 00:16:19.320 "trtype": "TCP", 00:16:19.320 "adrfam": "IPv4", 00:16:19.320 "traddr": "10.0.0.1", 00:16:19.320 "trsvcid": "43968" 00:16:19.320 }, 00:16:19.320 "auth": { 00:16:19.320 "state": "completed", 00:16:19.320 "digest": "sha512", 00:16:19.320 "dhgroup": "ffdhe6144" 00:16:19.320 } 00:16:19.320 } 00:16:19.320 ]' 00:16:19.320 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.578 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.578 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.578 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.578 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.578 09:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.578 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.578 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.835 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:19.835 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.400 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.658 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.915 00:16:20.915 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.915 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.915 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.173 { 00:16:21.173 "cntlid": 135, 00:16:21.173 "qid": 0, 00:16:21.173 "state": "enabled", 00:16:21.173 "thread": "nvmf_tgt_poll_group_000", 00:16:21.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:21.173 "listen_address": { 00:16:21.173 "trtype": "TCP", 00:16:21.173 "adrfam": "IPv4", 00:16:21.173 "traddr": "10.0.0.3", 00:16:21.173 "trsvcid": "4420" 00:16:21.173 }, 00:16:21.173 "peer_address": { 00:16:21.173 "trtype": "TCP", 00:16:21.173 "adrfam": "IPv4", 00:16:21.173 "traddr": "10.0.0.1", 00:16:21.173 "trsvcid": "43998" 00:16:21.173 }, 00:16:21.173 "auth": { 00:16:21.173 "state": "completed", 00:16:21.173 "digest": "sha512", 00:16:21.173 "dhgroup": "ffdhe6144" 00:16:21.173 } 00:16:21.173 } 00:16:21.173 ]' 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.173 
09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.173 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.431 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:21.431 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:21.995 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.996 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.254 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.820 00:16:22.820 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.820 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.820 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.386 { 00:16:23.386 "cntlid": 137, 00:16:23.386 "qid": 0, 00:16:23.386 "state": "enabled", 00:16:23.386 "thread": "nvmf_tgt_poll_group_000", 00:16:23.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:23.386 "listen_address": { 00:16:23.386 "trtype": "TCP", 00:16:23.386 "adrfam": "IPv4", 00:16:23.386 "traddr": "10.0.0.3", 00:16:23.386 "trsvcid": "4420" 00:16:23.386 }, 00:16:23.386 "peer_address": { 00:16:23.386 "trtype": "TCP", 00:16:23.386 "adrfam": "IPv4", 00:16:23.386 "traddr": "10.0.0.1", 00:16:23.386 "trsvcid": "56446" 00:16:23.386 }, 00:16:23.386 "auth": { 00:16:23.386 "state": "completed", 00:16:23.386 "digest": "sha512", 00:16:23.386 "dhgroup": "ffdhe8192" 00:16:23.386 } 00:16:23.386 } 00:16:23.386 ]' 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.386 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.386 09:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.387 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.387 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.387 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.645 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:23.645 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.214 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.474 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.474 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.474 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.474 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.042 00:16:25.042 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.042 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.042 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.301 { 00:16:25.301 "cntlid": 139, 00:16:25.301 "qid": 0, 00:16:25.301 "state": "enabled", 00:16:25.301 "thread": "nvmf_tgt_poll_group_000", 00:16:25.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:25.301 "listen_address": { 00:16:25.301 "trtype": "TCP", 00:16:25.301 "adrfam": "IPv4", 00:16:25.301 "traddr": "10.0.0.3", 00:16:25.301 "trsvcid": "4420" 00:16:25.301 }, 00:16:25.301 "peer_address": { 00:16:25.301 "trtype": "TCP", 00:16:25.301 "adrfam": "IPv4", 00:16:25.301 "traddr": "10.0.0.1", 00:16:25.301 "trsvcid": "56472" 00:16:25.301 }, 00:16:25.301 "auth": { 00:16:25.301 "state": "completed", 00:16:25.301 "digest": "sha512", 00:16:25.301 "dhgroup": "ffdhe8192" 00:16:25.301 } 00:16:25.301 } 00:16:25.301 ]' 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.301 09:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.301 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.560 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:25.560 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: --dhchap-ctrl-secret DHHC-1:02:MDJiNTZlMTJjYWI0NzhhYjQ1Mjg3Y2IzZTBjZmQ4NDZhNzIxMTRmOWFmOTcyY2E4986qcg==: 00:16:26.129 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.129 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:26.130 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.130 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.130 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.130 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.130 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.130 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.389 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.958 00:16:26.958 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.958 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.958 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.217 { 00:16:27.217 "cntlid": 141, 00:16:27.217 "qid": 0, 00:16:27.217 "state": "enabled", 00:16:27.217 "thread": "nvmf_tgt_poll_group_000", 00:16:27.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:27.217 "listen_address": { 00:16:27.217 "trtype": "TCP", 00:16:27.217 "adrfam": "IPv4", 00:16:27.217 "traddr": "10.0.0.3", 00:16:27.217 "trsvcid": "4420" 00:16:27.217 }, 00:16:27.217 "peer_address": { 00:16:27.217 "trtype": "TCP", 00:16:27.217 "adrfam": "IPv4", 00:16:27.217 "traddr": "10.0.0.1", 00:16:27.217 "trsvcid": "56500" 00:16:27.217 }, 00:16:27.217 "auth": { 00:16:27.217 "state": "completed", 00:16:27.217 "digest": "sha512", 00:16:27.217 "dhgroup": "ffdhe8192" 00:16:27.217 } 00:16:27.217 } 00:16:27.217 ]' 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
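Besides the SPDK host stack, every pass also exercises the kernel initiator: nvme-cli connects with the same DH-HMAC-CHAP secrets, the connection is dropped, and the host is removed from the subsystem. A condensed sketch of that step; the two DHHC-1 strings are placeholders standing in for whichever host/controller secret pair the current iteration uses (the full values appear in the surrounding log lines):

    host_uuid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68
    dhchap_secret='DHHC-1:02:...'        # placeholder for the host key under test
    dhchap_ctrl_secret='DHHC-1:01:...'   # placeholder for the matching controller key

    # Connect the kernel initiator with bidirectional DH-HMAC-CHAP.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:$host_uuid --hostid $host_uuid -l 0 \
        --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"

    # Drop the connection again; the log expects "disconnected 1 controller(s)".
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0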
00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.217 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.477 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:27.477 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:01:MjZmNzVjYTgxYWU0MzQxYzViYmY4NmIyOTY1YWZlYja6QNRW: 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.045 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.304 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.871 00:16:28.871 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.871 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.871 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.130 { 00:16:29.130 "cntlid": 143, 00:16:29.130 "qid": 0, 00:16:29.130 "state": "enabled", 00:16:29.130 "thread": "nvmf_tgt_poll_group_000", 00:16:29.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:29.130 "listen_address": { 00:16:29.130 "trtype": "TCP", 00:16:29.130 "adrfam": "IPv4", 00:16:29.130 "traddr": "10.0.0.3", 00:16:29.130 "trsvcid": "4420" 00:16:29.130 }, 00:16:29.130 "peer_address": { 00:16:29.130 "trtype": "TCP", 00:16:29.130 "adrfam": "IPv4", 00:16:29.130 "traddr": "10.0.0.1", 00:16:29.130 "trsvcid": "56528" 00:16:29.130 }, 00:16:29.130 "auth": { 00:16:29.130 "state": "completed", 00:16:29.130 "digest": "sha512", 00:16:29.130 "dhgroup": "ffdhe8192" 00:16:29.130 } 00:16:29.130 } 00:16:29.130 ]' 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.130 09:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.388 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:29.388 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:29.955 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:30.222 09:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.222 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.223 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.223 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.223 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.223 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.791 00:16:30.791 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.791 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.791 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.049 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.049 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.050 { 00:16:31.050 "cntlid": 145, 00:16:31.050 "qid": 0, 00:16:31.050 "state": "enabled", 00:16:31.050 "thread": "nvmf_tgt_poll_group_000", 00:16:31.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:31.050 "listen_address": { 00:16:31.050 "trtype": "TCP", 00:16:31.050 "adrfam": "IPv4", 00:16:31.050 "traddr": "10.0.0.3", 
00:16:31.050 "trsvcid": "4420" 00:16:31.050 }, 00:16:31.050 "peer_address": { 00:16:31.050 "trtype": "TCP", 00:16:31.050 "adrfam": "IPv4", 00:16:31.050 "traddr": "10.0.0.1", 00:16:31.050 "trsvcid": "56560" 00:16:31.050 }, 00:16:31.050 "auth": { 00:16:31.050 "state": "completed", 00:16:31.050 "digest": "sha512", 00:16:31.050 "dhgroup": "ffdhe8192" 00:16:31.050 } 00:16:31.050 } 00:16:31.050 ]' 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.050 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.308 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.308 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.308 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.308 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:31.308 09:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:00:MTNjZTZjZGRjMDU1OTcyNWI5MzNhNTM3NDU3NTUxMmRiZjVhMzRjZThmMWYwZDI1YyatcQ==: --dhchap-ctrl-secret DHHC-1:03:OWRmZDExMjkzODNlYWJhNGE4ZDczMWFhOGY4NmYxMDYyMDgyNDc1N2Q4ODc5NTIyZTcyY2U1YmY5YTdlMzFmY4f98es=: 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.888 
09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:31.888 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:32.454 request: 00:16:32.454 { 00:16:32.454 "name": "nvme0", 00:16:32.454 "trtype": "tcp", 00:16:32.454 "traddr": "10.0.0.3", 00:16:32.454 "adrfam": "ipv4", 00:16:32.454 "trsvcid": "4420", 00:16:32.454 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:32.454 "prchk_reftag": false, 00:16:32.454 "prchk_guard": false, 00:16:32.454 "hdgst": false, 00:16:32.454 "ddgst": false, 00:16:32.454 "dhchap_key": "key2", 00:16:32.454 "allow_unrecognized_csi": false, 00:16:32.454 "method": "bdev_nvme_attach_controller", 00:16:32.454 "req_id": 1 00:16:32.454 } 00:16:32.454 Got JSON-RPC error response 00:16:32.454 response: 00:16:32.454 { 00:16:32.454 "code": -5, 00:16:32.454 "message": "Input/output error" 00:16:32.454 } 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
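The exchange above is a deliberate negative test: the target-side host entry was registered with key1 only, the host then attempts bdev_nvme_attach_controller with key2, and the JSON-RPC reply is code -5 (Input/output error), which the suite's NOT wrapper from common/autotest_common.sh asserts on. A minimal sketch of that expected-failure pattern, reusing the host-side RPC socket and flags from the traces (the real NOT helper additionally inspects the exit-status range, as the es checks above show):

NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded -> test failure
  fi
  return 0      # non-zero exit is the expected outcome here
}
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2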
00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.454 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:33.021 request: 00:16:33.021 { 00:16:33.021 "name": "nvme0", 00:16:33.021 "trtype": "tcp", 00:16:33.021 "traddr": "10.0.0.3", 00:16:33.021 "adrfam": "ipv4", 00:16:33.021 "trsvcid": "4420", 00:16:33.021 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:33.022 "prchk_reftag": false, 00:16:33.022 "prchk_guard": false, 00:16:33.022 "hdgst": false, 00:16:33.022 "ddgst": false, 00:16:33.022 "dhchap_key": "key1", 00:16:33.022 "dhchap_ctrlr_key": "ckey2", 00:16:33.022 "allow_unrecognized_csi": false, 00:16:33.022 "method": "bdev_nvme_attach_controller", 00:16:33.022 "req_id": 1 00:16:33.022 } 00:16:33.022 Got JSON-RPC error response 00:16:33.022 response: 00:16:33.022 { 00:16:33.022 "code": -5, 00:16:33.022 "message": "Input/output error" 00:16:33.022 } 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:33.022 09:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.022 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.588 request: 00:16:33.588 { 00:16:33.588 "name": "nvme0", 00:16:33.588 "trtype": "tcp", 00:16:33.588 "traddr": "10.0.0.3", 00:16:33.588 "adrfam": "ipv4", 00:16:33.588 "trsvcid": "4420", 00:16:33.588 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:16:33.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:33.588 "prchk_reftag": false, 00:16:33.588 "prchk_guard": false, 00:16:33.588 "hdgst": false, 00:16:33.588 "ddgst": false, 00:16:33.588 "dhchap_key": "key1", 00:16:33.588 "dhchap_ctrlr_key": "ckey1", 00:16:33.588 "allow_unrecognized_csi": false, 00:16:33.588 "method": "bdev_nvme_attach_controller", 00:16:33.588 "req_id": 1 00:16:33.588 } 00:16:33.588 Got JSON-RPC error response 00:16:33.588 response: 00:16:33.588 { 00:16:33.588 "code": -5, 00:16:33.588 "message": "Input/output error" 00:16:33.588 } 00:16:33.588 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:33.588 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.588 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.588 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67152 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67152 ']' 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67152 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67152 00:16:33.589 killing process with pid 67152 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67152' 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67152 00:16:33.589 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67152 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69964 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69964 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69964 ']' 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.859 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.796 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:34.796 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.796 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.796 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 69964 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69964 ']' 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
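Once the new target is listening on /var/tmp/spdk.sock, the next traces load the on-disk secrets into the keyring (keyring_file_add_key for key0..key3 and the ckey files) so that later nvmf_subsystem_add_host calls can reference them by name. A hedged sketch of that registration pattern follows; the file name and secret are illustrative placeholders, and it assumes, as SPDK's own test helpers do, that the key file simply holds the DHHC-1 string (whose 00/01/02/03 field conventionally marks the associated hash: none, SHA-256, SHA-384, or SHA-512):

echo 'DHHC-1:00:placeholderbase64secret:' > /tmp/spdk.key-example
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-example
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0

These RPCs address the target over its default socket, whereas the hostrpc calls in the traces pass -s /var/tmp/host.sock to reach the separate host-side SPDK application.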
00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.797 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 null0 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.j7Q 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.YQs ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YQs 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vJ1 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.YqO ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YqO 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:35.056 09:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mXT 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.y1E ]] 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y1E 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.056 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KQC 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:16:35.315 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.883 nvme0n1 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.142 { 00:16:36.142 "cntlid": 1, 00:16:36.142 "qid": 0, 00:16:36.142 "state": "enabled", 00:16:36.142 "thread": "nvmf_tgt_poll_group_000", 00:16:36.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:36.142 "listen_address": { 00:16:36.142 "trtype": "TCP", 00:16:36.142 "adrfam": "IPv4", 00:16:36.142 "traddr": "10.0.0.3", 00:16:36.142 "trsvcid": "4420" 00:16:36.142 }, 00:16:36.142 "peer_address": { 00:16:36.142 "trtype": "TCP", 00:16:36.142 "adrfam": "IPv4", 00:16:36.142 "traddr": "10.0.0.1", 00:16:36.142 "trsvcid": "46906" 00:16:36.142 }, 00:16:36.142 "auth": { 00:16:36.142 "state": "completed", 00:16:36.142 "digest": "sha512", 00:16:36.142 "dhgroup": "ffdhe8192" 00:16:36.142 } 00:16:36.142 } 00:16:36.142 ]' 00:16:36.142 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.401 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.661 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:36.661 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key3 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:37.230 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.490 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.749 request: 00:16:37.749 { 00:16:37.749 "name": "nvme0", 00:16:37.749 "trtype": "tcp", 00:16:37.749 "traddr": "10.0.0.3", 00:16:37.749 "adrfam": "ipv4", 00:16:37.749 "trsvcid": "4420", 00:16:37.749 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:37.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:37.749 "prchk_reftag": false, 00:16:37.749 "prchk_guard": false, 00:16:37.749 "hdgst": false, 00:16:37.749 "ddgst": false, 00:16:37.750 "dhchap_key": "key3", 00:16:37.750 "allow_unrecognized_csi": false, 00:16:37.750 "method": "bdev_nvme_attach_controller", 00:16:37.750 "req_id": 1 00:16:37.750 } 00:16:37.750 Got JSON-RPC error response 00:16:37.750 response: 00:16:37.750 { 00:16:37.750 "code": -5, 00:16:37.750 "message": "Input/output error" 00:16:37.750 } 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.750 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.009 request: 00:16:38.009 { 00:16:38.009 "name": "nvme0", 00:16:38.009 "trtype": "tcp", 00:16:38.009 "traddr": "10.0.0.3", 00:16:38.009 "adrfam": "ipv4", 00:16:38.009 "trsvcid": "4420", 00:16:38.009 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:38.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:38.009 "prchk_reftag": false, 00:16:38.009 "prchk_guard": false, 00:16:38.009 "hdgst": false, 00:16:38.009 "ddgst": false, 00:16:38.009 "dhchap_key": "key3", 00:16:38.009 "allow_unrecognized_csi": false, 00:16:38.009 "method": "bdev_nvme_attach_controller", 00:16:38.009 "req_id": 1 00:16:38.009 } 00:16:38.009 Got JSON-RPC error response 00:16:38.009 response: 00:16:38.009 { 00:16:38.009 "code": -5, 00:16:38.009 "message": "Input/output error" 00:16:38.009 } 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.009 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.267 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.268 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.867 request: 00:16:38.867 { 00:16:38.867 "name": "nvme0", 00:16:38.867 "trtype": "tcp", 00:16:38.867 "traddr": "10.0.0.3", 00:16:38.867 "adrfam": "ipv4", 00:16:38.867 "trsvcid": "4420", 00:16:38.867 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:38.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:38.867 "prchk_reftag": false, 00:16:38.867 "prchk_guard": false, 00:16:38.867 "hdgst": false, 00:16:38.867 "ddgst": false, 00:16:38.867 "dhchap_key": "key0", 00:16:38.867 "dhchap_ctrlr_key": "key1", 00:16:38.867 "allow_unrecognized_csi": false, 00:16:38.867 "method": "bdev_nvme_attach_controller", 00:16:38.867 "req_id": 1 00:16:38.867 } 00:16:38.867 Got JSON-RPC error response 00:16:38.867 response: 00:16:38.867 { 00:16:38.867 "code": -5, 00:16:38.867 "message": "Input/output error" 00:16:38.867 } 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:38.867 nvme0n1 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.867 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:39.125 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.125 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.125 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:39.384 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:40.319 nvme0n1 00:16:40.319 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:40.319 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:40.319 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.319 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:40.576 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.576 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:40.576 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -l 0 --dhchap-secret DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: --dhchap-ctrl-secret DHHC-1:03:ODVkNDM0YmQ2YTIxOWFmOTlmNGM3MjVlYzM1MzQwNmYyNjRmZjJiYjI3NzYxNTE0YTI1NTNlNGIwYTFmMjk5MDSC2qw=: 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.140 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:41.398 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:41.963 request: 00:16:41.963 { 00:16:41.963 "name": "nvme0", 00:16:41.963 "trtype": "tcp", 00:16:41.963 "traddr": "10.0.0.3", 00:16:41.963 "adrfam": "ipv4", 00:16:41.963 "trsvcid": "4420", 00:16:41.963 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:41.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68", 00:16:41.963 "prchk_reftag": false, 00:16:41.963 "prchk_guard": false, 00:16:41.963 "hdgst": false, 00:16:41.963 "ddgst": false, 00:16:41.963 "dhchap_key": "key1", 00:16:41.963 "allow_unrecognized_csi": false, 00:16:41.963 "method": "bdev_nvme_attach_controller", 00:16:41.963 "req_id": 1 00:16:41.963 } 00:16:41.963 Got JSON-RPC error response 00:16:41.963 response: 00:16:41.963 { 00:16:41.963 "code": -5, 00:16:41.963 "message": "Input/output error" 00:16:41.963 } 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:41.963 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:42.895 nvme0n1 00:16:42.895 
09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:42.895 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.895 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:42.895 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.895 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.896 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:43.153 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:43.411 nvme0n1 00:16:43.411 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:43.411 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.411 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:43.669 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.669 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.669 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.928 09:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: '' 2s 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: ]] 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODUzZGYyNDNhMmFlNDgwYzJhMTQwYzQyNWFiZWQxZWSsSQKw: 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:43.928 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.832 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: 2s 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:46.091 09:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: ]] 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDk2MDQ0ZThjODY3YjcxZGRlMTViYTU4YjZmOWQwM2MxZDllMGUxYzY4OTQzMzg5CaUkbw==: 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:46.091 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:47.993 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:48.929 nvme0n1 00:16:48.929 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:48.930 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.930 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.930 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.930 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:48.930 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:49.495 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:49.495 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:49.495 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.495 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.495 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:49.495 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.495 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.753 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.753 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:49.753 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:49.753 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:49.753 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.753 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:50.010 09:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.010 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:50.596 request: 00:16:50.596 { 00:16:50.596 "name": "nvme0", 00:16:50.596 "dhchap_key": "key1", 00:16:50.596 "dhchap_ctrlr_key": "key3", 00:16:50.596 "method": "bdev_nvme_set_keys", 00:16:50.596 "req_id": 1 00:16:50.596 } 00:16:50.596 Got JSON-RPC error response 00:16:50.596 response: 00:16:50.596 { 00:16:50.596 "code": -13, 00:16:50.596 "message": "Permission denied" 00:16:50.596 } 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:50.596 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.853 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:50.853 09:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:51.782 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:51.782 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:51.782 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:52.040 09:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:52.970 nvme0n1 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:52.970 09:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:53.534 request: 00:16:53.534 { 00:16:53.534 "name": "nvme0", 00:16:53.534 "dhchap_key": "key2", 00:16:53.534 "dhchap_ctrlr_key": "key0", 00:16:53.534 "method": "bdev_nvme_set_keys", 00:16:53.534 "req_id": 1 00:16:53.534 } 00:16:53.534 Got JSON-RPC error response 00:16:53.534 response: 00:16:53.534 { 00:16:53.534 "code": -13, 00:16:53.534 "message": "Permission denied" 00:16:53.534 } 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:53.534 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.791 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:53.791 09:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:54.722 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:54.722 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:54.722 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67184 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67184 ']' 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67184 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67184 00:16:54.980 killing process with pid 67184 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.980 09:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67184' 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67184 00:16:54.980 09:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67184 00:16:55.545 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:55.545 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.545 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:55.545 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.546 rmmod nvme_tcp 00:16:55.546 rmmod nvme_fabrics 00:16:55.546 rmmod nvme_keyring 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 69964 ']' 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 69964 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69964 ']' 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69964 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69964 00:16:55.546 killing process with pid 69964 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69964' 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69964 00:16:55.546 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69964 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.804 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.j7Q /tmp/spdk.key-sha256.vJ1 /tmp/spdk.key-sha384.mXT /tmp/spdk.key-sha512.KQC /tmp/spdk.key-sha512.YQs /tmp/spdk.key-sha384.YqO /tmp/spdk.key-sha256.y1E '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:56.063 00:16:56.063 real 2m43.335s 00:16:56.063 user 6m14.088s 00:16:56.063 sys 0m35.266s 00:16:56.063 ************************************ 00:16:56.063 END TEST nvmf_auth_target 00:16:56.063 ************************************ 00:16:56.063 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.063 09:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.322 ************************************ 00:16:56.322 START TEST nvmf_bdevio_no_huge 00:16:56.322 ************************************ 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:56.322 * Looking for test storage... 00:16:56.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:16:56.322 09:27:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:56.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.322 --rc genhtml_branch_coverage=1 00:16:56.322 --rc genhtml_function_coverage=1 00:16:56.322 --rc genhtml_legend=1 00:16:56.322 --rc geninfo_all_blocks=1 00:16:56.322 --rc geninfo_unexecuted_blocks=1 00:16:56.322 00:16:56.322 ' 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:56.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.322 --rc genhtml_branch_coverage=1 00:16:56.322 --rc genhtml_function_coverage=1 00:16:56.322 --rc genhtml_legend=1 00:16:56.322 --rc geninfo_all_blocks=1 00:16:56.322 --rc geninfo_unexecuted_blocks=1 00:16:56.322 00:16:56.322 ' 00:16:56.322 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:56.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.322 --rc genhtml_branch_coverage=1 00:16:56.322 --rc genhtml_function_coverage=1 00:16:56.322 --rc genhtml_legend=1 00:16:56.322 --rc geninfo_all_blocks=1 00:16:56.322 --rc geninfo_unexecuted_blocks=1 00:16:56.322 00:16:56.322 ' 00:16:56.323 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:56.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.323 --rc genhtml_branch_coverage=1 00:16:56.323 --rc genhtml_function_coverage=1 00:16:56.323 --rc genhtml_legend=1 00:16:56.323 --rc geninfo_all_blocks=1 00:16:56.323 --rc geninfo_unexecuted_blocks=1 00:16:56.323 00:16:56.323 ' 00:16:56.323 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.323 
09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.582 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:56.582 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.583 
09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:56.583 Cannot find device "nvmf_init_br" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:56.583 Cannot find device "nvmf_init_br2" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:56.583 Cannot find device "nvmf_tgt_br" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.583 Cannot find device "nvmf_tgt_br2" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.583 Cannot find device "nvmf_init_br" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.583 Cannot find device "nvmf_init_br2" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.583 Cannot find device "nvmf_tgt_br" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.583 Cannot find device "nvmf_tgt_br2" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.583 Cannot find device "nvmf_br" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:56.583 Cannot find device "nvmf_init_if" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:56.583 Cannot find device "nvmf_init_if2" 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:56.583 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:56.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:56.843 09:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.843 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:57.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:57.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:57.102 00:16:57.102 --- 10.0.0.3 ping statistics --- 00:16:57.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.102 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:57.102 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:57.102 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:57.102 00:16:57.102 --- 10.0.0.4 ping statistics --- 00:16:57.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.102 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:57.102 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:16:57.102 00:16:57.102 --- 10.0.0.1 ping statistics --- 00:16:57.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.102 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:57.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:57.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:57.103 00:16:57.103 --- 10.0.0.2 ping statistics --- 00:16:57.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.103 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70580 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70580 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70580 ']' 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.103 09:27:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 [2024-12-09 09:27:34.716689] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:16:57.103 [2024-12-09 09:27:34.716757] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:57.363 [2024-12-09 09:27:34.879129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.363 [2024-12-09 09:27:34.945907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.363 [2024-12-09 09:27:34.945965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.363 [2024-12-09 09:27:34.945981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.363 [2024-12-09 09:27:34.945990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.363 [2024-12-09 09:27:34.945997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.363 [2024-12-09 09:27:34.946523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.363 [2024-12-09 09:27:34.946899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:57.363 [2024-12-09 09:27:34.947109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.363 [2024-12-09 09:27:34.947112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:57.363 [2024-12-09 09:27:34.952229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.935 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.935 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:57.935 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.935 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.935 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.194 [2024-12-09 09:27:35.676834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.194 Malloc0 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.194 09:27:35 
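nvmfappstart runs the target inside the namespace: NVMF_APP was prefixed with the "ip netns exec nvmf_tgt_ns_spdk" command at common.sh@227 above, so the app's NVMe/TCP listeners bind to the namespaced interfaces while its RPC socket (/var/tmp/spdk.sock) remains reachable from the host side, and "--no-huge -s 1024" keeps this no_huge variant off hugepages. A minimal way to reproduce the launch and the wait-for-socket step by hand (paths as in this run; the polling loop is a simplification of waitforlisten, not the real helper):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# crude stand-in for waitforlisten: poll the RPC socket until the app answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done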
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:58.194 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.195 [2024-12-09 09:27:35.733925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:58.195 { 00:16:58.195 "params": { 00:16:58.195 "name": "Nvme$subsystem", 00:16:58.195 "trtype": "$TEST_TRANSPORT", 00:16:58.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.195 "adrfam": "ipv4", 00:16:58.195 "trsvcid": "$NVMF_PORT", 00:16:58.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.195 "hdgst": ${hdgst:-false}, 00:16:58.195 "ddgst": ${ddgst:-false} 00:16:58.195 }, 00:16:58.195 "method": "bdev_nvme_attach_controller" 00:16:58.195 } 00:16:58.195 EOF 00:16:58.195 )") 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
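The bdevio target is provisioned entirely over that RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, an allow-any-host subsystem, the namespace, and a listener on the namespaced address. rpc_cmd is just a thin wrapper around scripts/rpc.py, so the equivalent explicit calls (same arguments as traced above) would be:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192             # transport options as passed by the test harness
$RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420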
00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:58.195 09:27:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:58.195 "params": { 00:16:58.195 "name": "Nvme1", 00:16:58.195 "trtype": "tcp", 00:16:58.195 "traddr": "10.0.0.3", 00:16:58.195 "adrfam": "ipv4", 00:16:58.195 "trsvcid": "4420", 00:16:58.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.195 "hdgst": false, 00:16:58.195 "ddgst": false 00:16:58.195 }, 00:16:58.195 "method": "bdev_nvme_attach_controller" 00:16:58.195 }' 00:16:58.195 [2024-12-09 09:27:35.793036] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:16:58.195 [2024-12-09 09:27:35.793307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70616 ] 00:16:58.453 [2024-12-09 09:27:35.954249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:58.454 [2024-12-09 09:27:36.018920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.454 [2024-12-09 09:27:36.019115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.454 [2024-12-09 09:27:36.019115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.454 [2024-12-09 09:27:36.032041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.713 I/O targets: 00:16:58.713 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:58.713 00:16:58.713 00:16:58.713 CUnit - A unit testing framework for C - Version 2.1-3 00:16:58.713 http://cunit.sourceforge.net/ 00:16:58.713 00:16:58.713 00:16:58.713 Suite: bdevio tests on: Nvme1n1 00:16:58.713 Test: blockdev write read block ...passed 00:16:58.713 Test: blockdev write zeroes read block ...passed 00:16:58.713 Test: blockdev write zeroes read no split ...passed 00:16:58.713 Test: blockdev write zeroes read split ...passed 00:16:58.713 Test: blockdev write zeroes read split partial ...passed 00:16:58.713 Test: blockdev reset ...[2024-12-09 09:27:36.265804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:58.713 [2024-12-09 09:27:36.266143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a7e90 (9): Bad file descriptor 00:16:58.713 [2024-12-09 09:27:36.281230] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:58.713 passed 00:16:58.713 Test: blockdev write read 8 blocks ...passed 00:16:58.713 Test: blockdev write read size > 128k ...passed 00:16:58.713 Test: blockdev write read invalid size ...passed 00:16:58.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:58.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:58.713 Test: blockdev write read max offset ...passed 00:16:58.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:58.713 Test: blockdev writev readv 8 blocks ...passed 00:16:58.713 Test: blockdev writev readv 30 x 1block ...passed 00:16:58.713 Test: blockdev writev readv block ...passed 00:16:58.713 Test: blockdev writev readv size > 128k ...passed 00:16:58.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:58.713 Test: blockdev comparev and writev ...[2024-12-09 09:27:36.290889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.291957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:58.713 [2024-12-09 09:27:36.291966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:58.713 passed 00:16:58.713 Test: blockdev nvme passthru rw ...passed 00:16:58.713 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:27:36.292808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.713 [2024-12-09 09:27:36.292829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.292910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.713 [2024-12-09 09:27:36.292926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.293000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.713 [2024-12-09 09:27:36.293015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:58.713 [2024-12-09 09:27:36.293101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.713 [2024-12-09 09:27:36.293116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:58.713 passed 00:16:58.713 Test: blockdev nvme admin passthru ...passed 00:16:58.713 Test: blockdev copy ...passed 00:16:58.713 00:16:58.713 Run Summary: Type Total Ran Passed Failed Inactive 00:16:58.713 suites 1 1 n/a 0 0 00:16:58.713 tests 23 23 23 0 0 00:16:58.713 asserts 152 152 152 0 n/a 00:16:58.713 00:16:58.713 Elapsed time = 0.173 seconds 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.971 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.229 rmmod nvme_tcp 00:16:59.229 rmmod nvme_fabrics 00:16:59.229 rmmod nvme_keyring 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70580 ']' 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70580 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70580 ']' 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70580 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70580 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:59.229 killing process with pid 70580 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70580' 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70580 00:16:59.229 09:27:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70580 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:59.795 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:59.795 09:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.796 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:17:00.058 ************************************ 00:17:00.058 END TEST nvmf_bdevio_no_huge 00:17:00.058 ************************************ 00:17:00.058 00:17:00.058 real 0m3.722s 00:17:00.058 user 0m10.345s 00:17:00.058 sys 0m1.650s 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.058 ************************************ 00:17:00.058 START TEST nvmf_tls 00:17:00.058 ************************************ 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:00.058 * Looking for test storage... 
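The teardown traced just above mirrors the bring-up: the SPDK_NVMF-tagged iptables rules are removed by reloading a filtered rule dump, the veth peers are detached from the bridge and deleted, and the namespace goes away. Condensed to the single pair from the earlier sketch (remove_spdk_ns effectively reduces to deleting the namespace here):

# drop only the rules the test added (they carry the SPDK_NVMF comment tag)
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link set nvmf_init_br down
ip link set nvmf_tgt_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if                                   # deleting one end removes the veth pair
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                              # what remove_spdk_ns amounts to in this run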
00:17:00.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:00.058 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:00.364 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:00.364 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.365 --rc genhtml_branch_coverage=1 00:17:00.365 --rc genhtml_function_coverage=1 00:17:00.365 --rc genhtml_legend=1 00:17:00.365 --rc geninfo_all_blocks=1 00:17:00.365 --rc geninfo_unexecuted_blocks=1 00:17:00.365 00:17:00.365 ' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.365 --rc genhtml_branch_coverage=1 00:17:00.365 --rc genhtml_function_coverage=1 00:17:00.365 --rc genhtml_legend=1 00:17:00.365 --rc geninfo_all_blocks=1 00:17:00.365 --rc geninfo_unexecuted_blocks=1 00:17:00.365 00:17:00.365 ' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.365 --rc genhtml_branch_coverage=1 00:17:00.365 --rc genhtml_function_coverage=1 00:17:00.365 --rc genhtml_legend=1 00:17:00.365 --rc geninfo_all_blocks=1 00:17:00.365 --rc geninfo_unexecuted_blocks=1 00:17:00.365 00:17:00.365 ' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.365 --rc genhtml_branch_coverage=1 00:17:00.365 --rc genhtml_function_coverage=1 00:17:00.365 --rc genhtml_legend=1 00:17:00.365 --rc geninfo_all_blocks=1 00:17:00.365 --rc geninfo_unexecuted_blocks=1 00:17:00.365 00:17:00.365 ' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.365 09:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.365 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.365 
09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.365 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:00.366 Cannot find device "nvmf_init_br" 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:00.366 Cannot find device "nvmf_init_br2" 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:00.366 Cannot find device "nvmf_tgt_br" 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.366 Cannot find device "nvmf_tgt_br2" 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:00.366 Cannot find device "nvmf_init_br" 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:17:00.366 09:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:00.366 Cannot find device "nvmf_init_br2" 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:00.366 Cannot find device "nvmf_tgt_br" 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:00.366 Cannot find device "nvmf_tgt_br2" 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:00.366 Cannot find device "nvmf_br" 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:17:00.366 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:00.631 Cannot find device "nvmf_init_if" 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:00.631 Cannot find device "nvmf_init_if2" 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:00.631 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:00.891 09:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:00.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:00.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:17:00.891 00:17:00.891 --- 10.0.0.3 ping statistics --- 00:17:00.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.891 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:00.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:00.891 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:17:00.891 00:17:00.891 --- 10.0.0.4 ping statistics --- 00:17:00.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.891 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:00.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:00.891 00:17:00.891 --- 10.0.0.1 ping statistics --- 00:17:00.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.891 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:00.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:00.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:17:00.891 00:17:00.891 --- 10.0.0.2 ping statistics --- 00:17:00.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.891 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70855 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70855 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70855 ']' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.891 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:00.891 [2024-12-09 09:27:38.532134] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:17:00.891 [2024-12-09 09:27:38.532199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.150 [2024-12-09 09:27:38.686976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.150 [2024-12-09 09:27:38.750567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.150 [2024-12-09 09:27:38.750621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.150 [2024-12-09 09:27:38.750631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.150 [2024-12-09 09:27:38.750640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.150 [2024-12-09 09:27:38.750647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.150 [2024-12-09 09:27:38.750996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.715 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.715 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:01.715 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:01.715 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:01.715 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.973 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.973 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:01.973 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:01.973 true 00:17:01.973 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:01.973 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:02.231 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:02.231 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:02.231 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:02.489 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:02.489 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:02.747 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:02.747 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:02.747 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:02.747 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:17:02.747 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:03.006 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:03.006 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:03.006 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:03.006 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:03.264 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:03.264 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:03.264 09:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:03.523 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:03.523 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:03.781 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:03.781 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:03.781 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:04.039 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hu81noV0jy 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gOVsKITlaA 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hu81noV0jy 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gOVsKITlaA 00:17:04.298 09:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:04.557 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:04.816 [2024-12-09 09:27:42.304714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.817 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hu81noV0jy 00:17:04.817 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hu81noV0jy 00:17:04.817 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:05.076 [2024-12-09 09:27:42.562970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.076 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:05.076 09:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:05.335 [2024-12-09 09:27:42.994390] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.335 [2024-12-09 09:27:42.994688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:05.335 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:05.593 malloc0 00:17:05.594 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.852 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hu81noV0jy 00:17:06.111 09:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:06.370 09:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hu81noV0jy 00:17:16.342 Initializing NVMe Controllers 00:17:16.342 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.342 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:16.342 Initialization complete. Launching workers. 00:17:16.342 ======================================================== 00:17:16.342 Latency(us) 00:17:16.342 Device Information : IOPS MiB/s Average min max 00:17:16.342 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13395.03 52.32 4778.54 1019.82 16196.99 00:17:16.342 ======================================================== 00:17:16.342 Total : 13395.03 52.32 4778.54 1019.82 16196.99 00:17:16.342 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hu81noV0jy 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hu81noV0jy 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71082 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71082 /var/tmp/bdevperf.sock 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71082 ']' 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
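The trace above generates two NVMe TLS PSKs in interchange format, stores them in mode-0600 temp files, brings up a TLS-enabled NVMe/TCP target, and then verifies it end to end with spdk_nvme_perf over the ssl socket implementation. Below is a condensed sketch of that bring-up, reconstructed from the traced rpc.py calls; the rpc.py path, the 10.0.0.3 address, and the key string are copied from the log, while the output redirection into the temp file is implied by the later keyring_file_add_key call rather than visible in the xtrace output.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key_path=$(mktemp)      # /tmp/tmp.hu81noV0jy in this run
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on nvmf_subsystem_add_listener is what enables TLS on the listener (logged above as experimental), and the spdk_nvme_perf run that follows passes -S ssl together with --psk-path pointing at the same key file, which is why the fabrics connect succeeds and sustains roughly 13.4k IOPS in the table above.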
00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.601 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.601 [2024-12-09 09:27:54.119160] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:16.601 [2024-12-09 09:27:54.119236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:17:16.601 [2024-12-09 09:27:54.273004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.601 [2024-12-09 09:27:54.319166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.860 [2024-12-09 09:27:54.360371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.440 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.440 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:17.440 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hu81noV0jy 00:17:17.440 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:17.697 [2024-12-09 09:27:55.351814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.955 TLSTESTn1 00:17:17.955 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:17.955 Running I/O for 10 seconds... 
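On the initiator side, the bdevperf run launched above mirrors that setup: the same key file is registered on bdevperf's private RPC socket, bdev_nvme_attach_controller is called with --psk key0, and the verify workload is then driven through bdevperf.py. A condensed sketch follows, with paths, socket name, and NQNs copied from the trace; the wait for the socket is paraphrased rather than reproduced from waitforlisten.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    # waitforlisten: poll until the RPC socket accepts connections, then:
    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.hu81noV0jy
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

Because the key matches what the target registered for host1, the attach succeeds and the TLSTESTn1 bdev below holds roughly 5.4k IOPS for the full 10-second run.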
00:17:19.819 5362.00 IOPS, 20.95 MiB/s [2024-12-09T09:27:58.915Z] 5394.00 IOPS, 21.07 MiB/s [2024-12-09T09:27:59.848Z] 5408.33 IOPS, 21.13 MiB/s [2024-12-09T09:28:00.782Z] 5419.25 IOPS, 21.17 MiB/s [2024-12-09T09:28:01.779Z] 5416.20 IOPS, 21.16 MiB/s [2024-12-09T09:28:02.713Z] 5420.50 IOPS, 21.17 MiB/s [2024-12-09T09:28:03.650Z] 5420.43 IOPS, 21.17 MiB/s [2024-12-09T09:28:04.585Z] 5419.00 IOPS, 21.17 MiB/s [2024-12-09T09:28:05.518Z] 5413.78 IOPS, 21.15 MiB/s [2024-12-09T09:28:05.775Z] 5434.50 IOPS, 21.23 MiB/s 00:17:28.052 Latency(us) 00:17:28.052 [2024-12-09T09:28:05.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.052 Verification LBA range: start 0x0 length 0x2000 00:17:28.052 TLSTESTn1 : 10.02 5436.59 21.24 0.00 0.00 23494.05 4026.91 22003.25 00:17:28.052 [2024-12-09T09:28:05.775Z] =================================================================================================================== 00:17:28.052 [2024-12-09T09:28:05.775Z] Total : 5436.59 21.24 0.00 0.00 23494.05 4026.91 22003.25 00:17:28.052 { 00:17:28.052 "results": [ 00:17:28.052 { 00:17:28.052 "job": "TLSTESTn1", 00:17:28.052 "core_mask": "0x4", 00:17:28.052 "workload": "verify", 00:17:28.052 "status": "finished", 00:17:28.052 "verify_range": { 00:17:28.052 "start": 0, 00:17:28.052 "length": 8192 00:17:28.052 }, 00:17:28.052 "queue_depth": 128, 00:17:28.052 "io_size": 4096, 00:17:28.052 "runtime": 10.019327, 00:17:28.052 "iops": 5436.59269729394, 00:17:28.052 "mibps": 21.23669022380445, 00:17:28.052 "io_failed": 0, 00:17:28.052 "io_timeout": 0, 00:17:28.052 "avg_latency_us": 23494.049738031637, 00:17:28.052 "min_latency_us": 4026.910843373494, 00:17:28.052 "max_latency_us": 22003.25140562249 00:17:28.052 } 00:17:28.052 ], 00:17:28.052 "core_count": 1 00:17:28.052 } 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71082 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71082 ']' 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71082 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71082 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:28.052 killing process with pid 71082 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71082' 00:17:28.052 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.052 00:17:28.052 Latency(us) 00:17:28.052 [2024-12-09T09:28:05.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.052 [2024-12-09T09:28:05.775Z] =================================================================================================================== 00:17:28.052 [2024-12-09T09:28:05.775Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71082 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71082 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOVsKITlaA 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOVsKITlaA 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.052 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOVsKITlaA 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gOVsKITlaA 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71215 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71215 /var/tmp/bdevperf.sock 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71215 ']' 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.053 09:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.310 [2024-12-09 09:28:05.818780] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
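From this point the script exercises failure paths: run_bdevperf is wrapped in the NOT helper from common/autotest_common.sh, so the test only passes when the wrapped command fails. The es bookkeeping visible in the trace (local es=0, es=1, the (( !es == 0 )) check) corresponds roughly to the simplified sketch below; this is an illustration of the pattern, not the exact in-tree helper, which also special-cases exit codes above 128.

    # Simplified sketch of the NOT negative-test wrapper seen in the trace.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only when the wrapped command failed
    }

    # First negative case from the trace: attach with the second key, which was
    # never registered on the target for host1, and expect the attach to fail.
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOVsKITlaA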
00:17:28.310 [2024-12-09 09:28:05.818845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71215 ] 00:17:28.310 [2024-12-09 09:28:05.965649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.310 [2024-12-09 09:28:06.016690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.567 [2024-12-09 09:28:06.058303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.131 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.131 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:29.131 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gOVsKITlaA 00:17:29.388 09:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:29.388 [2024-12-09 09:28:07.102410] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.646 [2024-12-09 09:28:07.111714] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:29.646 [2024-12-09 09:28:07.111775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af4030 (107): Transport endpoint is not connected 00:17:29.646 [2024-12-09 09:28:07.112760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af4030 (9): Bad file descriptor 00:17:29.646 [2024-12-09 09:28:07.113756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:29.646 [2024-12-09 09:28:07.113796] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:29.646 [2024-12-09 09:28:07.113806] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:29.646 [2024-12-09 09:28:07.113820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:29.646 request: 00:17:29.646 { 00:17:29.646 "name": "TLSTEST", 00:17:29.646 "trtype": "tcp", 00:17:29.646 "traddr": "10.0.0.3", 00:17:29.646 "adrfam": "ipv4", 00:17:29.646 "trsvcid": "4420", 00:17:29.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.646 "prchk_reftag": false, 00:17:29.646 "prchk_guard": false, 00:17:29.646 "hdgst": false, 00:17:29.646 "ddgst": false, 00:17:29.646 "psk": "key0", 00:17:29.646 "allow_unrecognized_csi": false, 00:17:29.646 "method": "bdev_nvme_attach_controller", 00:17:29.646 "req_id": 1 00:17:29.646 } 00:17:29.646 Got JSON-RPC error response 00:17:29.646 response: 00:17:29.646 { 00:17:29.646 "code": -5, 00:17:29.646 "message": "Input/output error" 00:17:29.646 } 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71215 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71215 ']' 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71215 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71215 00:17:29.646 killing process with pid 71215 00:17:29.646 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.646 00:17:29.646 Latency(us) 00:17:29.646 [2024-12-09T09:28:07.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.646 [2024-12-09T09:28:07.369Z] =================================================================================================================== 00:17:29.646 [2024-12-09T09:28:07.369Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71215' 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71215 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71215 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hu81noV0jy 00:17:29.646 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hu81noV0jy 
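The error burst above is the expected outcome of that first negative case: with /tmp/tmp.gOVsKITlaA loaded as key0, the TLS handshake with the target never completes, nvme_tcp reports the socket as not connected and then as a bad file descriptor, and bdev_nvme_attach_controller returns JSON-RPC error -5 (Input/output error), so run_bdevperf exits nonzero and the NOT wrapper counts the test as passed. The failing step, pulled out of the trace and written as an explicit check (rpc path and arguments copied from the log; the if/else wrapper is only for illustration):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gOVsKITlaA
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo 'unexpected: attach succeeded with a PSK the target does not know' >&2
        exit 1
    fi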
00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hu81noV0jy 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hu81noV0jy 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71245 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71245 /var/tmp/bdevperf.sock 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71245 ']' 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.647 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.905 [2024-12-09 09:28:07.396056] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:17:29.905 [2024-12-09 09:28:07.396358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71245 ] 00:17:29.905 [2024-12-09 09:28:07.532108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.905 [2024-12-09 09:28:07.581704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.905 [2024-12-09 09:28:07.623279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.841 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.841 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:30.841 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hu81noV0jy 00:17:30.841 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:31.100 [2024-12-09 09:28:08.675434] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.100 [2024-12-09 09:28:08.680087] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:31.100 [2024-12-09 09:28:08.680317] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:31.100 [2024-12-09 09:28:08.680390] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.100 [2024-12-09 09:28:08.680854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742030 (107): Transport endpoint is not connected 00:17:31.100 [2024-12-09 09:28:08.681838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742030 (9): Bad file descriptor 00:17:31.100 [2024-12-09 09:28:08.682836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:31.100 [2024-12-09 09:28:08.682856] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:31.100 [2024-12-09 09:28:08.682866] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:31.100 [2024-12-09 09:28:08.682880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:31.100 request: 00:17:31.100 { 00:17:31.100 "name": "TLSTEST", 00:17:31.100 "trtype": "tcp", 00:17:31.100 "traddr": "10.0.0.3", 00:17:31.100 "adrfam": "ipv4", 00:17:31.100 "trsvcid": "4420", 00:17:31.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.100 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:31.100 "prchk_reftag": false, 00:17:31.100 "prchk_guard": false, 00:17:31.100 "hdgst": false, 00:17:31.100 "ddgst": false, 00:17:31.100 "psk": "key0", 00:17:31.100 "allow_unrecognized_csi": false, 00:17:31.100 "method": "bdev_nvme_attach_controller", 00:17:31.100 "req_id": 1 00:17:31.100 } 00:17:31.100 Got JSON-RPC error response 00:17:31.100 response: 00:17:31.100 { 00:17:31.100 "code": -5, 00:17:31.100 "message": "Input/output error" 00:17:31.100 } 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71245 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71245 ']' 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71245 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71245 00:17:31.100 killing process with pid 71245 00:17:31.100 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.100 00:17:31.100 Latency(us) 00:17:31.100 [2024-12-09T09:28:08.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.100 [2024-12-09T09:28:08.823Z] =================================================================================================================== 00:17:31.100 [2024-12-09T09:28:08.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71245' 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71245 00:17:31.100 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71245 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hu81noV0jy 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hu81noV0jy 
00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hu81noV0jy 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hu81noV0jy 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71270 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71270 /var/tmp/bdevperf.sock 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71270 ']' 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.360 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.360 [2024-12-09 09:28:08.954006] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:17:31.360 [2024-12-09 09:28:08.954082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71270 ] 00:17:31.619 [2024-12-09 09:28:09.105269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.619 [2024-12-09 09:28:09.155217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.619 [2024-12-09 09:28:09.196554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.184 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.184 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:32.184 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hu81noV0jy 00:17:32.442 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:32.699 [2024-12-09 09:28:10.280217] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.699 [2024-12-09 09:28:10.286541] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:32.699 [2024-12-09 09:28:10.287877] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:32.699 [2024-12-09 09:28:10.287931] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:32.699 [2024-12-09 09:28:10.288623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196e030 (107): Transport endpoint is not connected 00:17:32.699 [2024-12-09 09:28:10.289611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196e030 (9): Bad file descriptor 00:17:32.699 [2024-12-09 09:28:10.290609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:32.699 [2024-12-09 09:28:10.290754] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:32.699 [2024-12-09 09:28:10.290768] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:32.699 [2024-12-09 09:28:10.290784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:32.699 request: 00:17:32.699 { 00:17:32.699 "name": "TLSTEST", 00:17:32.699 "trtype": "tcp", 00:17:32.699 "traddr": "10.0.0.3", 00:17:32.699 "adrfam": "ipv4", 00:17:32.699 "trsvcid": "4420", 00:17:32.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:32.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.699 "prchk_reftag": false, 00:17:32.699 "prchk_guard": false, 00:17:32.699 "hdgst": false, 00:17:32.699 "ddgst": false, 00:17:32.699 "psk": "key0", 00:17:32.699 "allow_unrecognized_csi": false, 00:17:32.699 "method": "bdev_nvme_attach_controller", 00:17:32.699 "req_id": 1 00:17:32.699 } 00:17:32.699 Got JSON-RPC error response 00:17:32.699 response: 00:17:32.699 { 00:17:32.699 "code": -5, 00:17:32.699 "message": "Input/output error" 00:17:32.699 } 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71270 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71270 ']' 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71270 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71270 00:17:32.699 killing process with pid 71270 00:17:32.699 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.699 00:17:32.699 Latency(us) 00:17:32.699 [2024-12-09T09:28:10.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.699 [2024-12-09T09:28:10.422Z] =================================================================================================================== 00:17:32.699 [2024-12-09T09:28:10.422Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71270' 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71270 00:17:32.699 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71270 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:32.956 09:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71302 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71302 /var/tmp/bdevperf.sock 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71302 ']' 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.956 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.957 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.957 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.957 [2024-12-09 09:28:10.561094] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
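The two negative cases traced just above keep the valid key but change the identity: attaching as nqn.2016-06.io.spdk:host2 (never added with nvmf_subsystem_add_host) and attaching to nqn.2016-06.io.spdk:cnode2 (no such subsystem) both make the target's PSK lookup fail, logged as "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>", so the handshake is refused and the initiator again sees error -5. The identity string binds the PSK to the hostnqn/subnqn pair, which is why possessing the key file alone is not sufficient. For host2 to be accepted, the target would need its own host entry bound to a PSK, along these lines (not executed in this run, shown only to illustrate the binding):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0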
00:17:32.957 [2024-12-09 09:28:10.561163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71302 ] 00:17:33.215 [2024-12-09 09:28:10.710000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.215 [2024-12-09 09:28:10.759340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.215 [2024-12-09 09:28:10.800580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.784 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.784 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:33.784 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:34.084 [2024-12-09 09:28:11.632524] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:34.084 [2024-12-09 09:28:11.632722] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:34.084 request: 00:17:34.084 { 00:17:34.084 "name": "key0", 00:17:34.084 "path": "", 00:17:34.084 "method": "keyring_file_add_key", 00:17:34.084 "req_id": 1 00:17:34.084 } 00:17:34.084 Got JSON-RPC error response 00:17:34.084 response: 00:17:34.084 { 00:17:34.084 "code": -1, 00:17:34.084 "message": "Operation not permitted" 00:17:34.084 } 00:17:34.084 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.343 [2024-12-09 09:28:11.844331] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.343 [2024-12-09 09:28:11.844593] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:34.343 request: 00:17:34.343 { 00:17:34.343 "name": "TLSTEST", 00:17:34.343 "trtype": "tcp", 00:17:34.343 "traddr": "10.0.0.3", 00:17:34.343 "adrfam": "ipv4", 00:17:34.343 "trsvcid": "4420", 00:17:34.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.343 "prchk_reftag": false, 00:17:34.343 "prchk_guard": false, 00:17:34.343 "hdgst": false, 00:17:34.343 "ddgst": false, 00:17:34.343 "psk": "key0", 00:17:34.343 "allow_unrecognized_csi": false, 00:17:34.343 "method": "bdev_nvme_attach_controller", 00:17:34.343 "req_id": 1 00:17:34.343 } 00:17:34.343 Got JSON-RPC error response 00:17:34.343 response: 00:17:34.343 { 00:17:34.343 "code": -126, 00:17:34.343 "message": "Required key not available" 00:17:34.343 } 00:17:34.343 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71302 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71302 ']' 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71302 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.344 09:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71302 00:17:34.344 killing process with pid 71302 00:17:34.344 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.344 00:17:34.344 Latency(us) 00:17:34.344 [2024-12-09T09:28:12.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.344 [2024-12-09T09:28:12.067Z] =================================================================================================================== 00:17:34.344 [2024-12-09T09:28:12.067Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71302' 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71302 00:17:34.344 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71302 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70855 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70855 ']' 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70855 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70855 00:17:34.602 killing process with pid 70855 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70855' 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70855 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70855 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:34.602 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.VgAjhtmVhL 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.VgAjhtmVhL 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71341 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71341 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71341 ']' 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.860 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.860 [2024-12-09 09:28:12.406297] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:34.860 [2024-12-09 09:28:12.406363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.861 [2024-12-09 09:28:12.547250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.118 [2024-12-09 09:28:12.598132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.118 [2024-12-09 09:28:12.598424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:35.118 [2024-12-09 09:28:12.598444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.118 [2024-12-09 09:28:12.598452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.118 [2024-12-09 09:28:12.598478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.118 [2024-12-09 09:28:12.598763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.118 [2024-12-09 09:28:12.640402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.VgAjhtmVhL 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VgAjhtmVhL 00:17:35.684 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:35.942 [2024-12-09 09:28:13.536504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.942 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.201 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:36.460 [2024-12-09 09:28:13.971853] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.460 [2024-12-09 09:28:13.972065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:36.460 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:36.719 malloc0 00:17:36.719 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:36.719 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:17:36.977 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VgAjhtmVhL 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
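
Editor's note: the key_long value produced above follows the NVMe TLS PSK interchange format: the configured key bytes with a four-byte checksum appended, base64-encoded, and wrapped as NVMeTLSkey-1:<hash-id>:<base64>:. A minimal stand-alone sketch of what format_interchange_psk appears to do for digest 2 (the SHA-384 hash id), assuming the key string is used as raw ASCII bytes and the checksum is a little-endian zlib CRC32 -- both consistent with the base64 visible in this trace:

key=00112233445566778899aabbccddeeff0011223344556677
digest=2
key_long=$(python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key string treated as raw ASCII bytes
digest = int(sys.argv[2])                     # hash id: 1 = SHA-256, 2 = SHA-384
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte checksum appended to the key
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
)
key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"                        # keyring_file only accepts owner-only key files
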
00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VgAjhtmVhL 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71396 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71396 /var/tmp/bdevperf.sock 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71396 ']' 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.236 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.236 [2024-12-09 09:28:14.856532] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:17:37.236 [2024-12-09 09:28:14.856609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:17:37.495 [2024-12-09 09:28:15.127243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.495 [2024-12-09 09:28:15.179210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.755 [2024-12-09 09:28:15.220685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.013 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.013 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.013 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:17:38.271 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:38.530 [2024-12-09 09:28:16.126273] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.530 TLSTESTn1 00:17:38.530 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:38.789 Running I/O for 10 seconds... 00:17:40.664 5419.00 IOPS, 21.17 MiB/s [2024-12-09T09:28:19.325Z] 5416.00 IOPS, 21.16 MiB/s [2024-12-09T09:28:20.701Z] 5408.33 IOPS, 21.13 MiB/s [2024-12-09T09:28:21.637Z] 5407.75 IOPS, 21.12 MiB/s [2024-12-09T09:28:22.570Z] 5406.40 IOPS, 21.12 MiB/s [2024-12-09T09:28:23.503Z] 5406.83 IOPS, 21.12 MiB/s [2024-12-09T09:28:24.438Z] 5411.86 IOPS, 21.14 MiB/s [2024-12-09T09:28:25.375Z] 5436.88 IOPS, 21.24 MiB/s [2024-12-09T09:28:26.753Z] 5434.22 IOPS, 21.23 MiB/s [2024-12-09T09:28:26.753Z] 5432.90 IOPS, 21.22 MiB/s 00:17:49.030 Latency(us) 00:17:49.030 [2024-12-09T09:28:26.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.030 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:49.030 Verification LBA range: start 0x0 length 0x2000 00:17:49.030 TLSTESTn1 : 10.01 5438.37 21.24 0.00 0.00 23501.95 4711.22 22319.09 00:17:49.030 [2024-12-09T09:28:26.753Z] =================================================================================================================== 00:17:49.030 [2024-12-09T09:28:26.753Z] Total : 5438.37 21.24 0.00 0.00 23501.95 4711.22 22319.09 00:17:49.030 { 00:17:49.030 "results": [ 00:17:49.030 { 00:17:49.030 "job": "TLSTESTn1", 00:17:49.030 "core_mask": "0x4", 00:17:49.030 "workload": "verify", 00:17:49.030 "status": "finished", 00:17:49.030 "verify_range": { 00:17:49.030 "start": 0, 00:17:49.030 "length": 8192 00:17:49.030 }, 00:17:49.030 "queue_depth": 128, 00:17:49.030 "io_size": 4096, 00:17:49.030 "runtime": 10.012926, 00:17:49.030 "iops": 5438.370362469472, 00:17:49.030 "mibps": 21.243634228396374, 00:17:49.030 "io_failed": 0, 00:17:49.030 "io_timeout": 0, 00:17:49.030 "avg_latency_us": 23501.94791981678, 00:17:49.030 "min_latency_us": 4711.22248995984, 00:17:49.030 
"max_latency_us": 22319.087550200802 00:17:49.030 } 00:17:49.030 ], 00:17:49.030 "core_count": 1 00:17:49.030 } 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71396 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71396 ']' 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71396 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71396 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:49.030 killing process with pid 71396 00:17:49.030 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.030 00:17:49.030 Latency(us) 00:17:49.030 [2024-12-09T09:28:26.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.030 [2024-12-09T09:28:26.753Z] =================================================================================================================== 00:17:49.030 [2024-12-09T09:28:26.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71396' 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71396 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71396 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.VgAjhtmVhL 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VgAjhtmVhL 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VgAjhtmVhL 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VgAjhtmVhL 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VgAjhtmVhL 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71532 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71532 /var/tmp/bdevperf.sock 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71532 ']' 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.030 09:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.030 [2024-12-09 09:28:26.614058] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:17:49.030 [2024-12-09 09:28:26.614138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71532 ] 00:17:49.289 [2024-12-09 09:28:26.763531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.289 [2024-12-09 09:28:26.813641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.289 [2024-12-09 09:28:26.855006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:49.855 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.855 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:49.855 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:17:50.112 [2024-12-09 09:28:27.703364] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VgAjhtmVhL': 0100666 00:17:50.112 [2024-12-09 09:28:27.703410] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:50.112 request: 00:17:50.112 { 00:17:50.112 "name": "key0", 00:17:50.112 "path": "/tmp/tmp.VgAjhtmVhL", 00:17:50.112 "method": "keyring_file_add_key", 00:17:50.112 "req_id": 1 00:17:50.112 } 00:17:50.112 Got JSON-RPC error response 00:17:50.112 response: 00:17:50.112 { 00:17:50.112 "code": -1, 00:17:50.112 "message": "Operation not permitted" 00:17:50.112 } 00:17:50.112 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.370 [2024-12-09 09:28:27.895189] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.370 [2024-12-09 09:28:27.895252] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:50.370 request: 00:17:50.370 { 00:17:50.370 "name": "TLSTEST", 00:17:50.370 "trtype": "tcp", 00:17:50.370 "traddr": "10.0.0.3", 00:17:50.370 "adrfam": "ipv4", 00:17:50.370 "trsvcid": "4420", 00:17:50.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.370 "prchk_reftag": false, 00:17:50.370 "prchk_guard": false, 00:17:50.370 "hdgst": false, 00:17:50.370 "ddgst": false, 00:17:50.370 "psk": "key0", 00:17:50.370 "allow_unrecognized_csi": false, 00:17:50.370 "method": "bdev_nvme_attach_controller", 00:17:50.370 "req_id": 1 00:17:50.370 } 00:17:50.370 Got JSON-RPC error response 00:17:50.370 response: 00:17:50.370 { 00:17:50.370 "code": -126, 00:17:50.370 "message": "Required key not available" 00:17:50.370 } 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71532 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71532 ']' 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71532 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71532 00:17:50.370 killing process with pid 71532 00:17:50.370 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.370 00:17:50.370 Latency(us) 00:17:50.370 [2024-12-09T09:28:28.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.370 [2024-12-09T09:28:28.093Z] =================================================================================================================== 00:17:50.370 [2024-12-09T09:28:28.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71532' 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71532 00:17:50.370 09:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71532 00:17:50.628 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:50.628 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71341 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71341 ']' 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71341 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71341 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71341' 00:17:50.629 killing process with pid 71341 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71341 00:17:50.629 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71341 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71566 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71566 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71566 ']' 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.887 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.887 [2024-12-09 09:28:28.514856] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:50.887 [2024-12-09 09:28:28.514922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.146 [2024-12-09 09:28:28.665853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.146 [2024-12-09 09:28:28.727681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.146 [2024-12-09 09:28:28.727745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.146 [2024-12-09 09:28:28.727756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.146 [2024-12-09 09:28:28.727764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.146 [2024-12-09 09:28:28.727771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
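
Editor's note: the keyring_file_add_key failure above is deliberate. After the test switched the key file to 0666, keyring_file_check_path rejected it ("Invalid permissions for key file ... 0100666") and the subsequent bdev_nvme_attach_controller failed with "Required key not available". A small illustrative helper (hypothetical, not part of the test scripts) that screens a key file the same way before registering it, assuming the keyring refuses any group/other permission bits:

key_path=/tmp/tmp.VgAjhtmVhL                  # key file from this run
mode=$(stat -c '%a' "$key_path")
if (( 8#$mode & 077 )); then                  # any group/other bits set -> keyring will refuse it
    echo "key file $key_path has mode 0$mode, expected owner-only (0600)" >&2
    exit 1
fi
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 "$key_path"
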
00:17:51.146 [2024-12-09 09:28:28.728152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.146 [2024-12-09 09:28:28.799708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.VgAjhtmVhL 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VgAjhtmVhL 00:17:51.714 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.VgAjhtmVhL 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VgAjhtmVhL 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.973 [2024-12-09 09:28:29.639206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.973 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:52.231 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:52.490 [2024-12-09 09:28:30.074634] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:52.490 [2024-12-09 09:28:30.074919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.490 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.748 malloc0 00:17:52.748 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.006 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:17:53.006 
[2024-12-09 09:28:30.704546] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VgAjhtmVhL': 0100666 00:17:53.006 [2024-12-09 09:28:30.704599] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:53.006 request: 00:17:53.006 { 00:17:53.006 "name": "key0", 00:17:53.006 "path": "/tmp/tmp.VgAjhtmVhL", 00:17:53.006 "method": "keyring_file_add_key", 00:17:53.006 "req_id": 1 00:17:53.006 } 00:17:53.006 Got JSON-RPC error response 00:17:53.006 response: 00:17:53.006 { 00:17:53.006 "code": -1, 00:17:53.006 "message": "Operation not permitted" 00:17:53.006 } 00:17:53.006 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:53.264 [2024-12-09 09:28:30.920255] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:53.264 [2024-12-09 09:28:30.920319] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:53.264 request: 00:17:53.264 { 00:17:53.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.264 "host": "nqn.2016-06.io.spdk:host1", 00:17:53.264 "psk": "key0", 00:17:53.264 "method": "nvmf_subsystem_add_host", 00:17:53.264 "req_id": 1 00:17:53.264 } 00:17:53.264 Got JSON-RPC error response 00:17:53.264 response: 00:17:53.264 { 00:17:53.264 "code": -32603, 00:17:53.264 "message": "Internal error" 00:17:53.264 } 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71566 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71566 ']' 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71566 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.264 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71566 00:17:53.522 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.522 killing process with pid 71566 00:17:53.522 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.522 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71566' 00:17:53.522 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71566 00:17:53.522 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71566 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.VgAjhtmVhL 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71630 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71630 00:17:53.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71630 ']' 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.780 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:53.780 [2024-12-09 09:28:31.337048] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:53.780 [2024-12-09 09:28:31.337331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.780 [2024-12-09 09:28:31.491802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.039 [2024-12-09 09:28:31.551081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.039 [2024-12-09 09:28:31.551134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.039 [2024-12-09 09:28:31.551144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.039 [2024-12-09 09:28:31.551153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.039 [2024-12-09 09:28:31.551160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
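
Editor's note: setup_nvmf_tgt, which runs again below for the final positive case, issues the same target-side RPC sequence each time. Spelled out with the addresses and NQNs used in this run (the nvmf_tgt itself is launched inside the nvmf_tgt_ns_spdk netns; the RPCs go over the default /var/tmp/spdk.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                               # TCP transport with default options
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                                  # -k requests a TLS (secure channel) listener
$rpc bdev_malloc_create 32 4096 -b malloc0                         # 32 MiB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL                 # succeeds only while the file is 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
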
00:17:54.039 [2024-12-09 09:28:31.551542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.039 [2024-12-09 09:28:31.624370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.VgAjhtmVhL 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VgAjhtmVhL 00:17:54.685 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.945 [2024-12-09 09:28:32.444507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.945 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:55.204 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:55.204 [2024-12-09 09:28:32.875888] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:55.204 [2024-12-09 09:28:32.876343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.204 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:55.463 malloc0 00:17:55.463 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.723 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71691 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71691 /var/tmp/bdevperf.sock 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71691 ']' 
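
Editor's note: run_bdevperf, exercised once more below, is the host-side counterpart: start bdevperf in wait-for-RPC mode (-z) on its own RPC socket, register the same PSK file there, attach a TLS controller to the secure listener, and drive verify I/O through bdevperf.py. Condensed from the commands in this log (the script waits for the bdevperf socket to come up before issuing RPCs):

spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
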
00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.982 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.241 [2024-12-09 09:28:33.749734] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:56.241 [2024-12-09 09:28:33.749972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71691 ] 00:17:56.241 [2024-12-09 09:28:33.899260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.241 [2024-12-09 09:28:33.948387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.501 [2024-12-09 09:28:33.990356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.069 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.069 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.069 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:17:57.329 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.329 [2024-12-09 09:28:35.002685] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.588 TLSTESTn1 00:17:57.588 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:57.848 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:57.848 "subsystems": [ 00:17:57.848 { 00:17:57.848 "subsystem": "keyring", 00:17:57.848 "config": [ 00:17:57.848 { 00:17:57.848 "method": "keyring_file_add_key", 00:17:57.848 "params": { 00:17:57.848 "name": "key0", 00:17:57.848 "path": "/tmp/tmp.VgAjhtmVhL" 00:17:57.848 } 00:17:57.848 } 00:17:57.848 ] 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "subsystem": "iobuf", 00:17:57.848 "config": [ 00:17:57.848 { 00:17:57.848 "method": "iobuf_set_options", 00:17:57.848 "params": { 00:17:57.848 "small_pool_count": 8192, 00:17:57.848 "large_pool_count": 1024, 00:17:57.848 "small_bufsize": 8192, 00:17:57.848 "large_bufsize": 135168, 00:17:57.848 "enable_numa": false 00:17:57.848 } 00:17:57.848 } 00:17:57.848 ] 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "subsystem": "sock", 00:17:57.848 "config": [ 00:17:57.848 { 00:17:57.848 "method": "sock_set_default_impl", 00:17:57.848 "params": { 
00:17:57.848 "impl_name": "uring" 00:17:57.848 } 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "method": "sock_impl_set_options", 00:17:57.848 "params": { 00:17:57.848 "impl_name": "ssl", 00:17:57.848 "recv_buf_size": 4096, 00:17:57.848 "send_buf_size": 4096, 00:17:57.848 "enable_recv_pipe": true, 00:17:57.848 "enable_quickack": false, 00:17:57.848 "enable_placement_id": 0, 00:17:57.848 "enable_zerocopy_send_server": true, 00:17:57.848 "enable_zerocopy_send_client": false, 00:17:57.848 "zerocopy_threshold": 0, 00:17:57.848 "tls_version": 0, 00:17:57.848 "enable_ktls": false 00:17:57.848 } 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "method": "sock_impl_set_options", 00:17:57.848 "params": { 00:17:57.848 "impl_name": "posix", 00:17:57.848 "recv_buf_size": 2097152, 00:17:57.848 "send_buf_size": 2097152, 00:17:57.848 "enable_recv_pipe": true, 00:17:57.848 "enable_quickack": false, 00:17:57.848 "enable_placement_id": 0, 00:17:57.848 "enable_zerocopy_send_server": true, 00:17:57.848 "enable_zerocopy_send_client": false, 00:17:57.848 "zerocopy_threshold": 0, 00:17:57.848 "tls_version": 0, 00:17:57.848 "enable_ktls": false 00:17:57.848 } 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "method": "sock_impl_set_options", 00:17:57.848 "params": { 00:17:57.848 "impl_name": "uring", 00:17:57.848 "recv_buf_size": 2097152, 00:17:57.848 "send_buf_size": 2097152, 00:17:57.848 "enable_recv_pipe": true, 00:17:57.848 "enable_quickack": false, 00:17:57.848 "enable_placement_id": 0, 00:17:57.848 "enable_zerocopy_send_server": false, 00:17:57.848 "enable_zerocopy_send_client": false, 00:17:57.848 "zerocopy_threshold": 0, 00:17:57.848 "tls_version": 0, 00:17:57.848 "enable_ktls": false 00:17:57.848 } 00:17:57.848 } 00:17:57.848 ] 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "subsystem": "vmd", 00:17:57.848 "config": [] 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "subsystem": "accel", 00:17:57.848 "config": [ 00:17:57.848 { 00:17:57.848 "method": "accel_set_options", 00:17:57.848 "params": { 00:17:57.848 "small_cache_size": 128, 00:17:57.848 "large_cache_size": 16, 00:17:57.848 "task_count": 2048, 00:17:57.848 "sequence_count": 2048, 00:17:57.848 "buf_count": 2048 00:17:57.848 } 00:17:57.848 } 00:17:57.848 ] 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "subsystem": "bdev", 00:17:57.848 "config": [ 00:17:57.848 { 00:17:57.848 "method": "bdev_set_options", 00:17:57.848 "params": { 00:17:57.848 "bdev_io_pool_size": 65535, 00:17:57.848 "bdev_io_cache_size": 256, 00:17:57.848 "bdev_auto_examine": true, 00:17:57.848 "iobuf_small_cache_size": 128, 00:17:57.848 "iobuf_large_cache_size": 16 00:17:57.848 } 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "method": "bdev_raid_set_options", 00:17:57.848 "params": { 00:17:57.848 "process_window_size_kb": 1024, 00:17:57.848 "process_max_bandwidth_mb_sec": 0 00:17:57.848 } 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "method": "bdev_iscsi_set_options", 00:17:57.848 "params": { 00:17:57.848 "timeout_sec": 30 00:17:57.848 } 00:17:57.848 }, 00:17:57.848 { 00:17:57.848 "method": "bdev_nvme_set_options", 00:17:57.848 "params": { 00:17:57.848 "action_on_timeout": "none", 00:17:57.848 "timeout_us": 0, 00:17:57.848 "timeout_admin_us": 0, 00:17:57.848 "keep_alive_timeout_ms": 10000, 00:17:57.848 "arbitration_burst": 0, 00:17:57.848 "low_priority_weight": 0, 00:17:57.848 "medium_priority_weight": 0, 00:17:57.848 "high_priority_weight": 0, 00:17:57.848 "nvme_adminq_poll_period_us": 10000, 00:17:57.848 "nvme_ioq_poll_period_us": 0, 00:17:57.848 "io_queue_requests": 0, 00:17:57.848 "delay_cmd_submit": 
true, 00:17:57.848 "transport_retry_count": 4, 00:17:57.848 "bdev_retry_count": 3, 00:17:57.848 "transport_ack_timeout": 0, 00:17:57.848 "ctrlr_loss_timeout_sec": 0, 00:17:57.848 "reconnect_delay_sec": 0, 00:17:57.848 "fast_io_fail_timeout_sec": 0, 00:17:57.848 "disable_auto_failback": false, 00:17:57.848 "generate_uuids": false, 00:17:57.848 "transport_tos": 0, 00:17:57.849 "nvme_error_stat": false, 00:17:57.849 "rdma_srq_size": 0, 00:17:57.849 "io_path_stat": false, 00:17:57.849 "allow_accel_sequence": false, 00:17:57.849 "rdma_max_cq_size": 0, 00:17:57.849 "rdma_cm_event_timeout_ms": 0, 00:17:57.849 "dhchap_digests": [ 00:17:57.849 "sha256", 00:17:57.849 "sha384", 00:17:57.849 "sha512" 00:17:57.849 ], 00:17:57.849 "dhchap_dhgroups": [ 00:17:57.849 "null", 00:17:57.849 "ffdhe2048", 00:17:57.849 "ffdhe3072", 00:17:57.849 "ffdhe4096", 00:17:57.849 "ffdhe6144", 00:17:57.849 "ffdhe8192" 00:17:57.849 ] 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "bdev_nvme_set_hotplug", 00:17:57.849 "params": { 00:17:57.849 "period_us": 100000, 00:17:57.849 "enable": false 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "bdev_malloc_create", 00:17:57.849 "params": { 00:17:57.849 "name": "malloc0", 00:17:57.849 "num_blocks": 8192, 00:17:57.849 "block_size": 4096, 00:17:57.849 "physical_block_size": 4096, 00:17:57.849 "uuid": "cf5e58d0-baf6-410f-82db-93b8bfa5164e", 00:17:57.849 "optimal_io_boundary": 0, 00:17:57.849 "md_size": 0, 00:17:57.849 "dif_type": 0, 00:17:57.849 "dif_is_head_of_md": false, 00:17:57.849 "dif_pi_format": 0 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "bdev_wait_for_examine" 00:17:57.849 } 00:17:57.849 ] 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "subsystem": "nbd", 00:17:57.849 "config": [] 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "subsystem": "scheduler", 00:17:57.849 "config": [ 00:17:57.849 { 00:17:57.849 "method": "framework_set_scheduler", 00:17:57.849 "params": { 00:17:57.849 "name": "static" 00:17:57.849 } 00:17:57.849 } 00:17:57.849 ] 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "subsystem": "nvmf", 00:17:57.849 "config": [ 00:17:57.849 { 00:17:57.849 "method": "nvmf_set_config", 00:17:57.849 "params": { 00:17:57.849 "discovery_filter": "match_any", 00:17:57.849 "admin_cmd_passthru": { 00:17:57.849 "identify_ctrlr": false 00:17:57.849 }, 00:17:57.849 "dhchap_digests": [ 00:17:57.849 "sha256", 00:17:57.849 "sha384", 00:17:57.849 "sha512" 00:17:57.849 ], 00:17:57.849 "dhchap_dhgroups": [ 00:17:57.849 "null", 00:17:57.849 "ffdhe2048", 00:17:57.849 "ffdhe3072", 00:17:57.849 "ffdhe4096", 00:17:57.849 "ffdhe6144", 00:17:57.849 "ffdhe8192" 00:17:57.849 ] 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_set_max_subsystems", 00:17:57.849 "params": { 00:17:57.849 "max_subsystems": 1024 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_set_crdt", 00:17:57.849 "params": { 00:17:57.849 "crdt1": 0, 00:17:57.849 "crdt2": 0, 00:17:57.849 "crdt3": 0 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_create_transport", 00:17:57.849 "params": { 00:17:57.849 "trtype": "TCP", 00:17:57.849 "max_queue_depth": 128, 00:17:57.849 "max_io_qpairs_per_ctrlr": 127, 00:17:57.849 "in_capsule_data_size": 4096, 00:17:57.849 "max_io_size": 131072, 00:17:57.849 "io_unit_size": 131072, 00:17:57.849 "max_aq_depth": 128, 00:17:57.849 "num_shared_buffers": 511, 00:17:57.849 "buf_cache_size": 4294967295, 00:17:57.849 "dif_insert_or_strip": false, 00:17:57.849 "zcopy": false, 
00:17:57.849 "c2h_success": false, 00:17:57.849 "sock_priority": 0, 00:17:57.849 "abort_timeout_sec": 1, 00:17:57.849 "ack_timeout": 0, 00:17:57.849 "data_wr_pool_size": 0 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_create_subsystem", 00:17:57.849 "params": { 00:17:57.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.849 "allow_any_host": false, 00:17:57.849 "serial_number": "SPDK00000000000001", 00:17:57.849 "model_number": "SPDK bdev Controller", 00:17:57.849 "max_namespaces": 10, 00:17:57.849 "min_cntlid": 1, 00:17:57.849 "max_cntlid": 65519, 00:17:57.849 "ana_reporting": false 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_subsystem_add_host", 00:17:57.849 "params": { 00:17:57.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.849 "host": "nqn.2016-06.io.spdk:host1", 00:17:57.849 "psk": "key0" 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_subsystem_add_ns", 00:17:57.849 "params": { 00:17:57.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.849 "namespace": { 00:17:57.849 "nsid": 1, 00:17:57.849 "bdev_name": "malloc0", 00:17:57.849 "nguid": "CF5E58D0BAF6410F82DB93B8BFA5164E", 00:17:57.849 "uuid": "cf5e58d0-baf6-410f-82db-93b8bfa5164e", 00:17:57.849 "no_auto_visible": false 00:17:57.849 } 00:17:57.849 } 00:17:57.849 }, 00:17:57.849 { 00:17:57.849 "method": "nvmf_subsystem_add_listener", 00:17:57.849 "params": { 00:17:57.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.849 "listen_address": { 00:17:57.849 "trtype": "TCP", 00:17:57.849 "adrfam": "IPv4", 00:17:57.849 "traddr": "10.0.0.3", 00:17:57.849 "trsvcid": "4420" 00:17:57.849 }, 00:17:57.849 "secure_channel": true 00:17:57.849 } 00:17:57.849 } 00:17:57.849 ] 00:17:57.849 } 00:17:57.849 ] 00:17:57.849 }' 00:17:57.849 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:58.108 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:58.108 "subsystems": [ 00:17:58.109 { 00:17:58.109 "subsystem": "keyring", 00:17:58.109 "config": [ 00:17:58.109 { 00:17:58.109 "method": "keyring_file_add_key", 00:17:58.109 "params": { 00:17:58.109 "name": "key0", 00:17:58.109 "path": "/tmp/tmp.VgAjhtmVhL" 00:17:58.109 } 00:17:58.109 } 00:17:58.109 ] 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "subsystem": "iobuf", 00:17:58.109 "config": [ 00:17:58.109 { 00:17:58.109 "method": "iobuf_set_options", 00:17:58.109 "params": { 00:17:58.109 "small_pool_count": 8192, 00:17:58.109 "large_pool_count": 1024, 00:17:58.109 "small_bufsize": 8192, 00:17:58.109 "large_bufsize": 135168, 00:17:58.109 "enable_numa": false 00:17:58.109 } 00:17:58.109 } 00:17:58.109 ] 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "subsystem": "sock", 00:17:58.109 "config": [ 00:17:58.109 { 00:17:58.109 "method": "sock_set_default_impl", 00:17:58.109 "params": { 00:17:58.109 "impl_name": "uring" 00:17:58.109 } 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "method": "sock_impl_set_options", 00:17:58.109 "params": { 00:17:58.109 "impl_name": "ssl", 00:17:58.109 "recv_buf_size": 4096, 00:17:58.109 "send_buf_size": 4096, 00:17:58.109 "enable_recv_pipe": true, 00:17:58.109 "enable_quickack": false, 00:17:58.109 "enable_placement_id": 0, 00:17:58.109 "enable_zerocopy_send_server": true, 00:17:58.109 "enable_zerocopy_send_client": false, 00:17:58.109 "zerocopy_threshold": 0, 00:17:58.109 "tls_version": 0, 00:17:58.109 "enable_ktls": false 00:17:58.109 } 00:17:58.109 }, 
00:17:58.109 { 00:17:58.109 "method": "sock_impl_set_options", 00:17:58.109 "params": { 00:17:58.109 "impl_name": "posix", 00:17:58.109 "recv_buf_size": 2097152, 00:17:58.109 "send_buf_size": 2097152, 00:17:58.109 "enable_recv_pipe": true, 00:17:58.109 "enable_quickack": false, 00:17:58.109 "enable_placement_id": 0, 00:17:58.109 "enable_zerocopy_send_server": true, 00:17:58.109 "enable_zerocopy_send_client": false, 00:17:58.109 "zerocopy_threshold": 0, 00:17:58.109 "tls_version": 0, 00:17:58.109 "enable_ktls": false 00:17:58.109 } 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "method": "sock_impl_set_options", 00:17:58.109 "params": { 00:17:58.109 "impl_name": "uring", 00:17:58.109 "recv_buf_size": 2097152, 00:17:58.109 "send_buf_size": 2097152, 00:17:58.109 "enable_recv_pipe": true, 00:17:58.109 "enable_quickack": false, 00:17:58.109 "enable_placement_id": 0, 00:17:58.109 "enable_zerocopy_send_server": false, 00:17:58.109 "enable_zerocopy_send_client": false, 00:17:58.109 "zerocopy_threshold": 0, 00:17:58.109 "tls_version": 0, 00:17:58.109 "enable_ktls": false 00:17:58.109 } 00:17:58.109 } 00:17:58.109 ] 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "subsystem": "vmd", 00:17:58.109 "config": [] 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "subsystem": "accel", 00:17:58.109 "config": [ 00:17:58.109 { 00:17:58.109 "method": "accel_set_options", 00:17:58.109 "params": { 00:17:58.109 "small_cache_size": 128, 00:17:58.109 "large_cache_size": 16, 00:17:58.109 "task_count": 2048, 00:17:58.109 "sequence_count": 2048, 00:17:58.109 "buf_count": 2048 00:17:58.109 } 00:17:58.109 } 00:17:58.109 ] 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "subsystem": "bdev", 00:17:58.109 "config": [ 00:17:58.109 { 00:17:58.109 "method": "bdev_set_options", 00:17:58.109 "params": { 00:17:58.109 "bdev_io_pool_size": 65535, 00:17:58.109 "bdev_io_cache_size": 256, 00:17:58.109 "bdev_auto_examine": true, 00:17:58.109 "iobuf_small_cache_size": 128, 00:17:58.109 "iobuf_large_cache_size": 16 00:17:58.109 } 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "method": "bdev_raid_set_options", 00:17:58.109 "params": { 00:17:58.109 "process_window_size_kb": 1024, 00:17:58.109 "process_max_bandwidth_mb_sec": 0 00:17:58.109 } 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "method": "bdev_iscsi_set_options", 00:17:58.109 "params": { 00:17:58.109 "timeout_sec": 30 00:17:58.109 } 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "method": "bdev_nvme_set_options", 00:17:58.109 "params": { 00:17:58.109 "action_on_timeout": "none", 00:17:58.109 "timeout_us": 0, 00:17:58.109 "timeout_admin_us": 0, 00:17:58.109 "keep_alive_timeout_ms": 10000, 00:17:58.109 "arbitration_burst": 0, 00:17:58.109 "low_priority_weight": 0, 00:17:58.109 "medium_priority_weight": 0, 00:17:58.109 "high_priority_weight": 0, 00:17:58.109 "nvme_adminq_poll_period_us": 10000, 00:17:58.109 "nvme_ioq_poll_period_us": 0, 00:17:58.109 "io_queue_requests": 512, 00:17:58.109 "delay_cmd_submit": true, 00:17:58.109 "transport_retry_count": 4, 00:17:58.109 "bdev_retry_count": 3, 00:17:58.109 "transport_ack_timeout": 0, 00:17:58.109 "ctrlr_loss_timeout_sec": 0, 00:17:58.109 "reconnect_delay_sec": 0, 00:17:58.109 "fast_io_fail_timeout_sec": 0, 00:17:58.109 "disable_auto_failback": false, 00:17:58.109 "generate_uuids": false, 00:17:58.109 "transport_tos": 0, 00:17:58.109 "nvme_error_stat": false, 00:17:58.109 "rdma_srq_size": 0, 00:17:58.109 "io_path_stat": false, 00:17:58.109 "allow_accel_sequence": false, 00:17:58.109 "rdma_max_cq_size": 0, 00:17:58.109 "rdma_cm_event_timeout_ms": 0, 00:17:58.109 
"dhchap_digests": [ 00:17:58.109 "sha256", 00:17:58.109 "sha384", 00:17:58.109 "sha512" 00:17:58.109 ], 00:17:58.109 "dhchap_dhgroups": [ 00:17:58.109 "null", 00:17:58.109 "ffdhe2048", 00:17:58.109 "ffdhe3072", 00:17:58.109 "ffdhe4096", 00:17:58.109 "ffdhe6144", 00:17:58.109 "ffdhe8192" 00:17:58.109 ] 00:17:58.109 } 00:17:58.109 }, 00:17:58.109 { 00:17:58.109 "method": "bdev_nvme_attach_controller", 00:17:58.109 "params": { 00:17:58.109 "name": "TLSTEST", 00:17:58.109 "trtype": "TCP", 00:17:58.109 "adrfam": "IPv4", 00:17:58.109 "traddr": "10.0.0.3", 00:17:58.109 "trsvcid": "4420", 00:17:58.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.110 "prchk_reftag": false, 00:17:58.110 "prchk_guard": false, 00:17:58.110 "ctrlr_loss_timeout_sec": 0, 00:17:58.110 "reconnect_delay_sec": 0, 00:17:58.110 "fast_io_fail_timeout_sec": 0, 00:17:58.110 "psk": "key0", 00:17:58.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.110 "hdgst": false, 00:17:58.110 "ddgst": false, 00:17:58.110 "multipath": "multipath" 00:17:58.110 } 00:17:58.110 }, 00:17:58.110 { 00:17:58.110 "method": "bdev_nvme_set_hotplug", 00:17:58.110 "params": { 00:17:58.110 "period_us": 100000, 00:17:58.110 "enable": false 00:17:58.110 } 00:17:58.110 }, 00:17:58.110 { 00:17:58.110 "method": "bdev_wait_for_examine" 00:17:58.110 } 00:17:58.110 ] 00:17:58.110 }, 00:17:58.110 { 00:17:58.110 "subsystem": "nbd", 00:17:58.110 "config": [] 00:17:58.110 } 00:17:58.110 ] 00:17:58.110 }' 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71691 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71691 ']' 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71691 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71691 00:17:58.110 killing process with pid 71691 00:17:58.110 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.110 00:17:58.110 Latency(us) 00:17:58.110 [2024-12-09T09:28:35.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.110 [2024-12-09T09:28:35.833Z] =================================================================================================================== 00:17:58.110 [2024-12-09T09:28:35.833Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71691' 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71691 00:17:58.110 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71691 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71630 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71630 ']' 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71630 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71630 00:17:58.369 killing process with pid 71630 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71630' 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71630 00:17:58.369 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71630 00:17:58.628 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:58.628 "subsystems": [ 00:17:58.628 { 00:17:58.628 "subsystem": "keyring", 00:17:58.628 "config": [ 00:17:58.628 { 00:17:58.628 "method": "keyring_file_add_key", 00:17:58.628 "params": { 00:17:58.628 "name": "key0", 00:17:58.628 "path": "/tmp/tmp.VgAjhtmVhL" 00:17:58.628 } 00:17:58.628 } 00:17:58.628 ] 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "subsystem": "iobuf", 00:17:58.628 "config": [ 00:17:58.628 { 00:17:58.628 "method": "iobuf_set_options", 00:17:58.628 "params": { 00:17:58.628 "small_pool_count": 8192, 00:17:58.628 "large_pool_count": 1024, 00:17:58.628 "small_bufsize": 8192, 00:17:58.628 "large_bufsize": 135168, 00:17:58.628 "enable_numa": false 00:17:58.628 } 00:17:58.628 } 00:17:58.628 ] 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "subsystem": "sock", 00:17:58.628 "config": [ 00:17:58.628 { 00:17:58.628 "method": "sock_set_default_impl", 00:17:58.628 "params": { 00:17:58.628 "impl_name": "uring" 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "sock_impl_set_options", 00:17:58.628 "params": { 00:17:58.628 "impl_name": "ssl", 00:17:58.628 "recv_buf_size": 4096, 00:17:58.628 "send_buf_size": 4096, 00:17:58.628 "enable_recv_pipe": true, 00:17:58.628 "enable_quickack": false, 00:17:58.628 "enable_placement_id": 0, 00:17:58.628 "enable_zerocopy_send_server": true, 00:17:58.628 "enable_zerocopy_send_client": false, 00:17:58.628 "zerocopy_threshold": 0, 00:17:58.628 "tls_version": 0, 00:17:58.628 "enable_ktls": false 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "sock_impl_set_options", 00:17:58.628 "params": { 00:17:58.628 "impl_name": "posix", 00:17:58.628 "recv_buf_size": 2097152, 00:17:58.628 "send_buf_size": 2097152, 00:17:58.628 "enable_recv_pipe": true, 00:17:58.628 "enable_quickack": false, 00:17:58.628 "enable_placement_id": 0, 00:17:58.628 "enable_zerocopy_send_server": true, 00:17:58.628 "enable_zerocopy_send_client": false, 00:17:58.628 "zerocopy_threshold": 0, 00:17:58.628 "tls_version": 0, 00:17:58.628 "enable_ktls": false 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "sock_impl_set_options", 00:17:58.628 "params": { 00:17:58.628 "impl_name": "uring", 00:17:58.628 "recv_buf_size": 2097152, 00:17:58.628 "send_buf_size": 2097152, 00:17:58.628 "enable_recv_pipe": true, 00:17:58.628 "enable_quickack": false, 00:17:58.628 "enable_placement_id": 0, 00:17:58.628 "enable_zerocopy_send_server": false, 00:17:58.628 "enable_zerocopy_send_client": 
false, 00:17:58.628 "zerocopy_threshold": 0, 00:17:58.628 "tls_version": 0, 00:17:58.628 "enable_ktls": false 00:17:58.628 } 00:17:58.628 } 00:17:58.628 ] 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "subsystem": "vmd", 00:17:58.628 "config": [] 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "subsystem": "accel", 00:17:58.628 "config": [ 00:17:58.628 { 00:17:58.628 "method": "accel_set_options", 00:17:58.628 "params": { 00:17:58.628 "small_cache_size": 128, 00:17:58.628 "large_cache_size": 16, 00:17:58.628 "task_count": 2048, 00:17:58.628 "sequence_count": 2048, 00:17:58.628 "buf_count": 2048 00:17:58.628 } 00:17:58.628 } 00:17:58.628 ] 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "subsystem": "bdev", 00:17:58.628 "config": [ 00:17:58.628 { 00:17:58.628 "method": "bdev_set_options", 00:17:58.628 "params": { 00:17:58.628 "bdev_io_pool_size": 65535, 00:17:58.628 "bdev_io_cache_size": 256, 00:17:58.628 "bdev_auto_examine": true, 00:17:58.628 "iobuf_small_cache_size": 128, 00:17:58.628 "iobuf_large_cache_size": 16 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "bdev_raid_set_options", 00:17:58.628 "params": { 00:17:58.628 "process_window_size_kb": 1024, 00:17:58.628 "process_max_bandwidth_mb_sec": 0 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "bdev_iscsi_set_options", 00:17:58.628 "params": { 00:17:58.628 "timeout_sec": 30 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "bdev_nvme_set_options", 00:17:58.628 "params": { 00:17:58.628 "action_on_timeout": "none", 00:17:58.628 "timeout_us": 0, 00:17:58.628 "timeout_admin_us": 0, 00:17:58.628 "keep_alive_timeout_ms": 10000, 00:17:58.628 "arbitration_burst": 0, 00:17:58.628 "low_priority_weight": 0, 00:17:58.628 "medium_priority_weight": 0, 00:17:58.628 "high_priority_weight": 0, 00:17:58.628 "nvme_adminq_poll_period_us": 10000, 00:17:58.628 "nvme_ioq_poll_period_us": 0, 00:17:58.628 "io_queue_requests": 0, 00:17:58.628 "delay_cmd_submit": true, 00:17:58.628 "transport_retry_count": 4, 00:17:58.628 "bdev_retry_count": 3, 00:17:58.628 "transport_ack_timeout": 0, 00:17:58.628 "ctrlr_loss_timeout_sec": 0, 00:17:58.628 "reconnect_delay_sec": 0, 00:17:58.628 "fast_io_fail_timeout_sec": 0, 00:17:58.628 "disable_auto_failback": false, 00:17:58.628 "generate_uuids": false, 00:17:58.628 "transport_tos": 0, 00:17:58.628 "nvme_error_stat": false, 00:17:58.628 "rdma_srq_size": 0, 00:17:58.628 "io_path_stat": false, 00:17:58.628 "allow_accel_sequence": false, 00:17:58.628 "rdma_max_cq_size": 0, 00:17:58.628 "rdma_cm_event_timeout_ms": 0, 00:17:58.628 "dhchap_digests": [ 00:17:58.628 "sha256", 00:17:58.628 "sha384", 00:17:58.628 "sha512" 00:17:58.628 ], 00:17:58.628 "dhchap_dhgroups": [ 00:17:58.628 "null", 00:17:58.628 "ffdhe2048", 00:17:58.628 "ffdhe3072", 00:17:58.628 "ffdhe4096", 00:17:58.628 "ffdhe6144", 00:17:58.628 "ffdhe8192" 00:17:58.628 ] 00:17:58.628 } 00:17:58.628 }, 00:17:58.628 { 00:17:58.628 "method": "bdev_nvme_set_hotplug", 00:17:58.628 "params": { 00:17:58.629 "period_us": 100000, 00:17:58.629 "enable": false 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "bdev_malloc_create", 00:17:58.629 "params": { 00:17:58.629 "name": "malloc0", 00:17:58.629 "num_blocks": 8192, 00:17:58.629 "block_size": 4096, 00:17:58.629 "physical_block_size": 4096, 00:17:58.629 "uuid": "cf5e58d0-baf6-410f-82db-93b8bfa5164e", 00:17:58.629 "optimal_io_boundary": 0, 00:17:58.629 "md_size": 0, 00:17:58.629 "dif_type": 0, 00:17:58.629 "dif_is_head_of_md": false, 00:17:58.629 "dif_pi_format": 0 
00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "bdev_wait_for_examine" 00:17:58.629 } 00:17:58.629 ] 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "subsystem": "nbd", 00:17:58.629 "config": [] 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "subsystem": "scheduler", 00:17:58.629 "config": [ 00:17:58.629 { 00:17:58.629 "method": "framework_set_scheduler", 00:17:58.629 "params": { 00:17:58.629 "name": "static" 00:17:58.629 } 00:17:58.629 } 00:17:58.629 ] 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "subsystem": "nvmf", 00:17:58.629 "config": [ 00:17:58.629 { 00:17:58.629 "method": "nvmf_set_config", 00:17:58.629 "params": { 00:17:58.629 "discovery_filter": "match_any", 00:17:58.629 "admin_cmd_passthru": { 00:17:58.629 "identify_ctrlr": false 00:17:58.629 }, 00:17:58.629 "dhchap_digests": [ 00:17:58.629 "sha256", 00:17:58.629 "sha384", 00:17:58.629 "sha512" 00:17:58.629 ], 00:17:58.629 "dhchap_dhgroups": [ 00:17:58.629 "null", 00:17:58.629 "ffdhe2048", 00:17:58.629 "ffdhe3072", 00:17:58.629 "ffdhe4096", 00:17:58.629 "ffdhe6144", 00:17:58.629 "ffdhe8192" 00:17:58.629 ] 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_set_max_subsystems", 00:17:58.629 "params": { 00:17:58.629 "max_subsystems": 1024 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_set_crdt", 00:17:58.629 "params": { 00:17:58.629 "crdt1": 0, 00:17:58.629 "crdt2": 0, 00:17:58.629 "crdt3": 0 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_create_transport", 00:17:58.629 "params": { 00:17:58.629 "trtype": "TCP", 00:17:58.629 "max_queue_depth": 128, 00:17:58.629 "max_io_qpairs_per_ctrlr": 127, 00:17:58.629 "in_capsule_data_size": 4096, 00:17:58.629 "max_io_size": 131072, 00:17:58.629 "io_unit_size": 131072, 00:17:58.629 "max_aq_depth": 128, 00:17:58.629 "num_shared_buffers": 511, 00:17:58.629 "buf_cache_size": 4294967295, 00:17:58.629 "dif_insert_or_strip": false, 00:17:58.629 "zcopy": false, 00:17:58.629 "c2h_success": false, 00:17:58.629 "sock_priority": 0, 00:17:58.629 "abort_timeout_sec": 1, 00:17:58.629 "ack_timeout": 0, 00:17:58.629 "data_wr_pool_size": 0 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_create_subsystem", 00:17:58.629 "params": { 00:17:58.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.629 "allow_any_host": false, 00:17:58.629 "serial_number": "SPDK00000000000001", 00:17:58.629 "model_number": "SPDK bdev Controller", 00:17:58.629 "max_namespaces": 10, 00:17:58.629 "min_cntlid": 1, 00:17:58.629 "max_cntlid": 65519, 00:17:58.629 "ana_reporting": false 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_subsystem_add_host", 00:17:58.629 "params": { 00:17:58.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.629 "host": "nqn.2016-06.io.spdk:host1", 00:17:58.629 "psk": "key0" 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_subsystem_add_ns", 00:17:58.629 "params": { 00:17:58.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.629 "namespace": { 00:17:58.629 "nsid": 1, 00:17:58.629 "bdev_name": "malloc0", 00:17:58.629 "nguid": "CF5E58D0BAF6410F82DB93B8BFA5164E", 00:17:58.629 "uuid": "cf5e58d0-baf6-410f-82db-93b8bfa5164e", 00:17:58.629 "no_auto_visible": false 00:17:58.629 } 00:17:58.629 } 00:17:58.629 }, 00:17:58.629 { 00:17:58.629 "method": "nvmf_subsystem_add_listener", 00:17:58.629 "params": { 00:17:58.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.629 "listen_address": { 00:17:58.629 "trtype": "TCP", 00:17:58.629 "adrfam": "IPv4", 00:17:58.629 
"traddr": "10.0.0.3", 00:17:58.629 "trsvcid": "4420" 00:17:58.629 }, 00:17:58.629 "secure_channel": true 00:17:58.629 } 00:17:58.629 } 00:17:58.629 ] 00:17:58.629 } 00:17:58.629 ] 00:17:58.629 }' 00:17:58.629 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:58.629 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.629 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.629 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.629 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71735 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71735 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71735 ']' 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.630 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.630 [2024-12-09 09:28:36.295446] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:58.630 [2024-12-09 09:28:36.295532] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.889 [2024-12-09 09:28:36.448595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.889 [2024-12-09 09:28:36.515690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.889 [2024-12-09 09:28:36.515763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.889 [2024-12-09 09:28:36.515777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.889 [2024-12-09 09:28:36.515788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.889 [2024-12-09 09:28:36.515797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.889 [2024-12-09 09:28:36.516236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.147 [2024-12-09 09:28:36.701348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.147 [2024-12-09 09:28:36.798242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.147 [2024-12-09 09:28:36.830153] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:59.147 [2024-12-09 09:28:36.830546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71767 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71767 /var/tmp/bdevperf.sock 00:17:59.715 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:59.715 "subsystems": [ 00:17:59.715 { 00:17:59.715 "subsystem": "keyring", 00:17:59.715 "config": [ 00:17:59.715 { 00:17:59.715 "method": "keyring_file_add_key", 00:17:59.715 "params": { 00:17:59.715 "name": "key0", 00:17:59.715 "path": "/tmp/tmp.VgAjhtmVhL" 00:17:59.715 } 00:17:59.715 } 00:17:59.715 ] 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "subsystem": "iobuf", 00:17:59.715 "config": [ 00:17:59.715 { 00:17:59.715 "method": "iobuf_set_options", 00:17:59.715 "params": { 00:17:59.715 "small_pool_count": 8192, 00:17:59.715 "large_pool_count": 1024, 00:17:59.715 "small_bufsize": 8192, 00:17:59.715 "large_bufsize": 135168, 00:17:59.715 "enable_numa": false 00:17:59.715 } 00:17:59.715 } 00:17:59.715 ] 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "subsystem": "sock", 00:17:59.715 "config": [ 00:17:59.715 { 00:17:59.715 "method": "sock_set_default_impl", 00:17:59.715 "params": { 00:17:59.715 "impl_name": "uring" 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "sock_impl_set_options", 00:17:59.715 "params": { 00:17:59.715 "impl_name": "ssl", 00:17:59.715 "recv_buf_size": 4096, 00:17:59.715 "send_buf_size": 4096, 00:17:59.715 "enable_recv_pipe": true, 00:17:59.715 "enable_quickack": false, 00:17:59.715 "enable_placement_id": 0, 00:17:59.715 "enable_zerocopy_send_server": true, 00:17:59.715 "enable_zerocopy_send_client": false, 00:17:59.715 "zerocopy_threshold": 0, 00:17:59.715 "tls_version": 0, 00:17:59.715 "enable_ktls": false 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "sock_impl_set_options", 00:17:59.715 "params": { 00:17:59.715 "impl_name": "posix", 00:17:59.715 "recv_buf_size": 2097152, 00:17:59.715 "send_buf_size": 2097152, 
00:17:59.715 "enable_recv_pipe": true, 00:17:59.715 "enable_quickack": false, 00:17:59.715 "enable_placement_id": 0, 00:17:59.715 "enable_zerocopy_send_server": true, 00:17:59.715 "enable_zerocopy_send_client": false, 00:17:59.715 "zerocopy_threshold": 0, 00:17:59.715 "tls_version": 0, 00:17:59.715 "enable_ktls": false 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "sock_impl_set_options", 00:17:59.715 "params": { 00:17:59.715 "impl_name": "uring", 00:17:59.715 "recv_buf_size": 2097152, 00:17:59.715 "send_buf_size": 2097152, 00:17:59.715 "enable_recv_pipe": true, 00:17:59.715 "enable_quickack": false, 00:17:59.715 "enable_placement_id": 0, 00:17:59.715 "enable_zerocopy_send_server": false, 00:17:59.715 "enable_zerocopy_send_client": false, 00:17:59.715 "zerocopy_threshold": 0, 00:17:59.715 "tls_version": 0, 00:17:59.715 "enable_ktls": false 00:17:59.715 } 00:17:59.715 } 00:17:59.715 ] 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "subsystem": "vmd", 00:17:59.715 "config": [] 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "subsystem": "accel", 00:17:59.715 "config": [ 00:17:59.715 { 00:17:59.715 "method": "accel_set_options", 00:17:59.715 "params": { 00:17:59.715 "small_cache_size": 128, 00:17:59.715 "large_cache_size": 16, 00:17:59.715 "task_count": 2048, 00:17:59.715 "sequence_count": 2048, 00:17:59.715 "buf_count": 2048 00:17:59.715 } 00:17:59.715 } 00:17:59.715 ] 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "subsystem": "bdev", 00:17:59.715 "config": [ 00:17:59.715 { 00:17:59.715 "method": "bdev_set_options", 00:17:59.715 "params": { 00:17:59.715 "bdev_io_pool_size": 65535, 00:17:59.715 "bdev_io_cache_size": 256, 00:17:59.715 "bdev_auto_examine": true, 00:17:59.715 "iobuf_small_cache_size": 128, 00:17:59.715 "iobuf_large_cache_size": 16 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "bdev_raid_set_options", 00:17:59.715 "params": { 00:17:59.715 "process_window_size_kb": 1024, 00:17:59.715 "process_max_bandwidth_mb_sec": 0 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "bdev_iscsi_set_options", 00:17:59.715 "params": { 00:17:59.715 "timeout_sec": 30 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "bdev_nvme_set_options", 00:17:59.715 "params": { 00:17:59.715 "action_on_timeout": "none", 00:17:59.715 "timeout_us": 0, 00:17:59.715 "timeout_admin_us": 0, 00:17:59.715 "keep_alive_timeout_ms": 10000, 00:17:59.715 "arbitration_burst": 0, 00:17:59.715 "low_priority_weight": 0, 00:17:59.715 "medium_priority_weight": 0, 00:17:59.715 "high_priority_weight": 0, 00:17:59.715 "nvme_adminq_poll_period_us": 10000, 00:17:59.715 "nvme_ioq_poll_period_us": 0, 00:17:59.715 "io_queue_requests": 512, 00:17:59.715 "delay_cmd_submit": true, 00:17:59.715 "transport_retry_count": 4, 00:17:59.715 "bdev_retry_count": 3, 00:17:59.715 "transport_ack_timeout": 0, 00:17:59.715 "ctrlr_loss_timeout_sec": 0, 00:17:59.715 "reconnect_delay_sec": 0, 00:17:59.715 "fast_io_fail_timeout_sec": 0, 00:17:59.715 "disable_auto_failback": false, 00:17:59.715 "generate_uuids": false, 00:17:59.715 "transport_tos": 0, 00:17:59.715 "nvme_error_stat": false, 00:17:59.715 "rdma_srq_size": 0, 00:17:59.715 "io_path_stat": false, 00:17:59.715 "allow_accel_sequence": false, 00:17:59.715 "rdma_max_cq_size": 0, 00:17:59.715 "rdma_cm_event_timeout_ms": 0, 00:17:59.715 "dhchap_digests": [ 00:17:59.715 "sha256", 00:17:59.715 "sha384", 00:17:59.715 "sha512" 00:17:59.715 ], 00:17:59.715 "dhchap_dhgroups": [ 00:17:59.715 "null", 00:17:59.715 "ffdhe2048", 00:17:59.715 
"ffdhe3072", 00:17:59.715 "ffdhe4096", 00:17:59.715 "ffdhe6144", 00:17:59.715 "ffdhe8192" 00:17:59.715 ] 00:17:59.715 } 00:17:59.715 }, 00:17:59.715 { 00:17:59.715 "method": "bdev_nvme_attach_controller", 00:17:59.715 "params": { 00:17:59.715 "name": "TLSTEST", 00:17:59.715 "trtype": "TCP", 00:17:59.715 "adrfam": "IPv4", 00:17:59.715 "traddr": "10.0.0.3", 00:17:59.715 "trsvcid": "4420", 00:17:59.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.716 "prchk_reftag": false, 00:17:59.716 "prchk_guard": false, 00:17:59.716 "ctrlr_loss_timeout_sec": 0, 00:17:59.716 "reconnect_delay_sec": 0, 00:17:59.716 "fast_io_fail_timeout_sec": 0, 00:17:59.716 "psk": "key0", 00:17:59.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.716 "hdgst": false, 00:17:59.716 "ddgst": false, 00:17:59.716 "multipath": "multipath" 00:17:59.716 } 00:17:59.716 }, 00:17:59.716 { 00:17:59.716 "method": "bdev_nvme_set_hotplug", 00:17:59.716 "params": { 00:17:59.716 "period_us": 100000, 00:17:59.716 "enable": false 00:17:59.716 } 00:17:59.716 }, 00:17:59.716 { 00:17:59.716 "method": "bdev_wait_for_examine" 00:17:59.716 } 00:17:59.716 ] 00:17:59.716 }, 00:17:59.716 { 00:17:59.716 "subsystem": "nbd", 00:17:59.716 "config": [] 00:17:59.716 } 00:17:59.716 ] 00:17:59.716 }' 00:17:59.716 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71767 ']' 00:17:59.716 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.716 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.716 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:59.716 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.716 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.716 [2024-12-09 09:28:37.265928] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:17:59.716 [2024-12-09 09:28:37.266012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:17:59.716 [2024-12-09 09:28:37.420789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.974 [2024-12-09 09:28:37.471421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.974 [2024-12-09 09:28:37.594307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.974 [2024-12-09 09:28:37.637060] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.584 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.584 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.584 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:00.584 Running I/O for 10 seconds... 
00:18:02.897 5233.00 IOPS, 20.44 MiB/s [2024-12-09T09:28:41.556Z] 5257.50 IOPS, 20.54 MiB/s [2024-12-09T09:28:42.492Z] 5259.00 IOPS, 20.54 MiB/s [2024-12-09T09:28:43.428Z] 5258.25 IOPS, 20.54 MiB/s [2024-12-09T09:28:44.365Z] 5257.40 IOPS, 20.54 MiB/s [2024-12-09T09:28:45.339Z] 5256.83 IOPS, 20.53 MiB/s [2024-12-09T09:28:46.272Z] 5255.57 IOPS, 20.53 MiB/s [2024-12-09T09:28:47.647Z] 5252.25 IOPS, 20.52 MiB/s [2024-12-09T09:28:48.583Z] 5255.11 IOPS, 20.53 MiB/s [2024-12-09T09:28:48.583Z] 5254.20 IOPS, 20.52 MiB/s 00:18:10.860 Latency(us) 00:18:10.860 [2024-12-09T09:28:48.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.860 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.860 Verification LBA range: start 0x0 length 0x2000 00:18:10.860 TLSTESTn1 : 10.01 5260.09 20.55 0.00 0.00 24298.05 4421.71 18634.33 00:18:10.860 [2024-12-09T09:28:48.583Z] =================================================================================================================== 00:18:10.860 [2024-12-09T09:28:48.583Z] Total : 5260.09 20.55 0.00 0.00 24298.05 4421.71 18634.33 00:18:10.860 { 00:18:10.860 "results": [ 00:18:10.860 { 00:18:10.860 "job": "TLSTESTn1", 00:18:10.860 "core_mask": "0x4", 00:18:10.860 "workload": "verify", 00:18:10.860 "status": "finished", 00:18:10.860 "verify_range": { 00:18:10.860 "start": 0, 00:18:10.860 "length": 8192 00:18:10.860 }, 00:18:10.860 "queue_depth": 128, 00:18:10.860 "io_size": 4096, 00:18:10.860 "runtime": 10.012944, 00:18:10.860 "iops": 5260.091337772387, 00:18:10.860 "mibps": 20.547231788173388, 00:18:10.860 "io_failed": 0, 00:18:10.860 "io_timeout": 0, 00:18:10.860 "avg_latency_us": 24298.04639779189, 00:18:10.860 "min_latency_us": 4421.706024096386, 00:18:10.860 "max_latency_us": 18634.332530120482 00:18:10.860 } 00:18:10.860 ], 00:18:10.860 "core_count": 1 00:18:10.860 } 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71767 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71767 ']' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71767 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71767 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.860 killing process with pid 71767 00:18:10.860 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.860 00:18:10.860 Latency(us) 00:18:10.860 [2024-12-09T09:28:48.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.860 [2024-12-09T09:28:48.583Z] =================================================================================================================== 00:18:10.860 [2024-12-09T09:28:48.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 71767' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71767 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71767 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71735 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71735 ']' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71735 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71735 00:18:10.860 killing process with pid 71735 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71735' 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71735 00:18:10.860 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71735 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71900 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71900 00:18:11.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71900 ']' 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.120 09:28:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.378 [2024-12-09 09:28:48.873832] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:18:11.378 [2024-12-09 09:28:48.873904] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.378 [2024-12-09 09:28:49.027753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.378 [2024-12-09 09:28:49.077583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.379 [2024-12-09 09:28:49.077634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.379 [2024-12-09 09:28:49.077644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.379 [2024-12-09 09:28:49.077653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.379 [2024-12-09 09:28:49.077660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.379 [2024-12-09 09:28:49.077923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.637 [2024-12-09 09:28:49.123996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.205 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.205 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.205 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.206 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.206 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.206 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.206 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.VgAjhtmVhL 00:18:12.206 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VgAjhtmVhL 00:18:12.206 09:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.464 [2024-12-09 09:28:50.047263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.464 09:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.722 09:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:12.980 [2024-12-09 09:28:50.506609] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.980 [2024-12-09 09:28:50.506831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:12.980 09:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:13.238 malloc0 00:18:13.238 09:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
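For reference, the target-side bring-up traced here reduces to a short rpc.py sequence: create the TCP transport, create the subsystem, open a TLS-enabled listener (-k), back it with a malloc bdev, and expose that bdev as namespace 1; the PSK registration and the host-to-key mapping are the next two commands in the trace. A hedged standalone sketch of the same calls, using the paths that appear in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # wire the TLS PSK to the allowed host, as traced immediately below
    $rpc keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0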
00:18:13.238 09:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:18:13.497 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=71960 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 71960 /var/tmp/bdevperf.sock 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71960 ']' 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.756 09:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.017 [2024-12-09 09:28:51.484145] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:18:14.017 [2024-12-09 09:28:51.484241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71960 ] 00:18:14.017 [2024-12-09 09:28:51.639226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.017 [2024-12-09 09:28:51.709991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.274 [2024-12-09 09:28:51.785304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.840 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.840 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.840 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:18:15.098 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:15.098 [2024-12-09 09:28:52.744689] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.098 nvme0n1 00:18:15.356 09:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:15.356 Running I/O for 1 seconds... 00:18:16.289 5145.00 IOPS, 20.10 MiB/s 00:18:16.289 Latency(us) 00:18:16.289 [2024-12-09T09:28:54.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.289 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:16.289 Verification LBA range: start 0x0 length 0x2000 00:18:16.289 nvme0n1 : 1.01 5202.11 20.32 0.00 0.00 24414.20 4948.10 20634.63 00:18:16.289 [2024-12-09T09:28:54.012Z] =================================================================================================================== 00:18:16.289 [2024-12-09T09:28:54.012Z] Total : 5202.11 20.32 0.00 0.00 24414.20 4948.10 20634.63 00:18:16.289 { 00:18:16.289 "results": [ 00:18:16.289 { 00:18:16.289 "job": "nvme0n1", 00:18:16.289 "core_mask": "0x2", 00:18:16.289 "workload": "verify", 00:18:16.289 "status": "finished", 00:18:16.289 "verify_range": { 00:18:16.289 "start": 0, 00:18:16.289 "length": 8192 00:18:16.289 }, 00:18:16.289 "queue_depth": 128, 00:18:16.289 "io_size": 4096, 00:18:16.289 "runtime": 1.013819, 00:18:16.289 "iops": 5202.112014077464, 00:18:16.289 "mibps": 20.320750054990093, 00:18:16.289 "io_failed": 0, 00:18:16.289 "io_timeout": 0, 00:18:16.289 "avg_latency_us": 24414.19935091142, 00:18:16.289 "min_latency_us": 4948.0995983935745, 00:18:16.289 "max_latency_us": 20634.6281124498 00:18:16.289 } 00:18:16.289 ], 00:18:16.289 "core_count": 1 00:18:16.289 } 00:18:16.289 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 71960 00:18:16.289 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71960 ']' 00:18:16.289 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71960 00:18:16.289 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:16.289 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.290 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71960 00:18:16.547 killing process with pid 71960 00:18:16.547 Received shutdown signal, test time was about 1.000000 seconds 00:18:16.547 00:18:16.547 Latency(us) 00:18:16.547 [2024-12-09T09:28:54.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.547 [2024-12-09T09:28:54.270Z] =================================================================================================================== 00:18:16.547 [2024-12-09T09:28:54.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.547 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:16.547 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.547 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71960' 00:18:16.547 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71960 00:18:16.547 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71960 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71900 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71900 ']' 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71900 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71900 00:18:16.806 killing process with pid 71900 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71900' 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71900 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71900 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72007 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72007 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 72007 ']' 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.806 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.064 [2024-12-09 09:28:54.551258] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:17.064 [2024-12-09 09:28:54.551554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.064 [2024-12-09 09:28:54.704418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.064 [2024-12-09 09:28:54.755965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.064 [2024-12-09 09:28:54.756012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.064 [2024-12-09 09:28:54.756022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.064 [2024-12-09 09:28:54.756030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.064 [2024-12-09 09:28:54.756037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
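This last target instance (pid 72007) is started without an inline config; target/tls.sh then builds the configuration over the RPC socket and later snapshots it with save_config, which is where the tgtcfg and bperfcfg JSON blobs further down come from. A sketch of that configure-then-snapshot flow, assuming the same RPC script path as above (the output file name is illustrative only):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # transport, subsystem, TLS listener, malloc0 namespace and PSK wired up as in the
    # earlier bring-up sequence, then dump the live configuration as JSON:
    $rpc save_config > tgt_config.json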
00:18:17.064 [2024-12-09 09:28:54.756319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.321 [2024-12-09 09:28:54.797871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.886 [2024-12-09 09:28:55.526826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.886 malloc0 00:18:17.886 [2024-12-09 09:28:55.559637] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.886 [2024-12-09 09:28:55.559839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72039 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72039 /var/tmp/bdevperf.sock 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72039 ']' 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.886 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.143 [2024-12-09 09:28:55.645924] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
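On the initiator side, attaching a TLS-protected controller through the bdevperf instance takes two RPCs against its private socket, registering the PSK interchange file as key0 and then attaching with --psk, exactly the pair traced just below. A standalone sketch of those calls, followed by the perform_tests helper that starts the verify workload:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL
    $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests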
00:18:18.143 [2024-12-09 09:28:55.646219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72039 ] 00:18:18.143 [2024-12-09 09:28:55.796993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.401 [2024-12-09 09:28:55.865979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.401 [2024-12-09 09:28:55.940334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:18.968 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.968 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.968 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VgAjhtmVhL 00:18:19.228 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:19.488 [2024-12-09 09:28:56.953186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.488 nvme0n1 00:18:19.488 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.488 Running I/O for 1 seconds... 00:18:20.862 5231.00 IOPS, 20.43 MiB/s 00:18:20.862 Latency(us) 00:18:20.862 [2024-12-09T09:28:58.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.862 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:20.862 Verification LBA range: start 0x0 length 0x2000 00:18:20.862 nvme0n1 : 1.02 5242.72 20.48 0.00 0.00 24273.38 5737.69 22213.81 00:18:20.862 [2024-12-09T09:28:58.585Z] =================================================================================================================== 00:18:20.862 [2024-12-09T09:28:58.585Z] Total : 5242.72 20.48 0.00 0.00 24273.38 5737.69 22213.81 00:18:20.862 { 00:18:20.862 "results": [ 00:18:20.862 { 00:18:20.862 "job": "nvme0n1", 00:18:20.862 "core_mask": "0x2", 00:18:20.862 "workload": "verify", 00:18:20.862 "status": "finished", 00:18:20.862 "verify_range": { 00:18:20.862 "start": 0, 00:18:20.862 "length": 8192 00:18:20.862 }, 00:18:20.862 "queue_depth": 128, 00:18:20.862 "io_size": 4096, 00:18:20.862 "runtime": 1.02237, 00:18:20.862 "iops": 5242.720345863044, 00:18:20.862 "mibps": 20.479376351027515, 00:18:20.862 "io_failed": 0, 00:18:20.862 "io_timeout": 0, 00:18:20.862 "avg_latency_us": 24273.382619432956, 00:18:20.862 "min_latency_us": 5737.6899598393575, 00:18:20.862 "max_latency_us": 22213.808835341366 00:18:20.862 } 00:18:20.862 ], 00:18:20.862 "core_count": 1 00:18:20.862 } 00:18:20.862 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:20.862 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.862 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.862 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.862 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:20.862 "subsystems": [ 00:18:20.862 { 00:18:20.862 "subsystem": "keyring", 00:18:20.862 "config": [ 00:18:20.862 { 00:18:20.862 "method": "keyring_file_add_key", 00:18:20.862 "params": { 00:18:20.862 "name": "key0", 00:18:20.862 "path": "/tmp/tmp.VgAjhtmVhL" 00:18:20.862 } 00:18:20.862 } 00:18:20.862 ] 00:18:20.862 }, 00:18:20.862 { 00:18:20.862 "subsystem": "iobuf", 00:18:20.862 "config": [ 00:18:20.862 { 00:18:20.862 "method": "iobuf_set_options", 00:18:20.862 "params": { 00:18:20.862 "small_pool_count": 8192, 00:18:20.862 "large_pool_count": 1024, 00:18:20.862 "small_bufsize": 8192, 00:18:20.862 "large_bufsize": 135168, 00:18:20.862 "enable_numa": false 00:18:20.862 } 00:18:20.862 } 00:18:20.862 ] 00:18:20.862 }, 00:18:20.862 { 00:18:20.862 "subsystem": "sock", 00:18:20.862 "config": [ 00:18:20.862 { 00:18:20.862 "method": "sock_set_default_impl", 00:18:20.862 "params": { 00:18:20.862 "impl_name": "uring" 00:18:20.862 } 00:18:20.862 }, 00:18:20.862 { 00:18:20.862 "method": "sock_impl_set_options", 00:18:20.862 "params": { 00:18:20.862 "impl_name": "ssl", 00:18:20.862 "recv_buf_size": 4096, 00:18:20.862 "send_buf_size": 4096, 00:18:20.862 "enable_recv_pipe": true, 00:18:20.862 "enable_quickack": false, 00:18:20.863 "enable_placement_id": 0, 00:18:20.863 "enable_zerocopy_send_server": true, 00:18:20.863 "enable_zerocopy_send_client": false, 00:18:20.863 "zerocopy_threshold": 0, 00:18:20.863 "tls_version": 0, 00:18:20.863 "enable_ktls": false 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "sock_impl_set_options", 00:18:20.863 "params": { 00:18:20.863 "impl_name": "posix", 00:18:20.863 "recv_buf_size": 2097152, 00:18:20.863 "send_buf_size": 2097152, 00:18:20.863 "enable_recv_pipe": true, 00:18:20.863 "enable_quickack": false, 00:18:20.863 "enable_placement_id": 0, 00:18:20.863 "enable_zerocopy_send_server": true, 00:18:20.863 "enable_zerocopy_send_client": false, 00:18:20.863 "zerocopy_threshold": 0, 00:18:20.863 "tls_version": 0, 00:18:20.863 "enable_ktls": false 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "sock_impl_set_options", 00:18:20.863 "params": { 00:18:20.863 "impl_name": "uring", 00:18:20.863 "recv_buf_size": 2097152, 00:18:20.863 "send_buf_size": 2097152, 00:18:20.863 "enable_recv_pipe": true, 00:18:20.863 "enable_quickack": false, 00:18:20.863 "enable_placement_id": 0, 00:18:20.863 "enable_zerocopy_send_server": false, 00:18:20.863 "enable_zerocopy_send_client": false, 00:18:20.863 "zerocopy_threshold": 0, 00:18:20.863 "tls_version": 0, 00:18:20.863 "enable_ktls": false 00:18:20.863 } 00:18:20.863 } 00:18:20.863 ] 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "subsystem": "vmd", 00:18:20.863 "config": [] 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "subsystem": "accel", 00:18:20.863 "config": [ 00:18:20.863 { 00:18:20.863 "method": "accel_set_options", 00:18:20.863 "params": { 00:18:20.863 "small_cache_size": 128, 00:18:20.863 "large_cache_size": 16, 00:18:20.863 "task_count": 2048, 00:18:20.863 "sequence_count": 2048, 00:18:20.863 "buf_count": 2048 00:18:20.863 } 00:18:20.863 } 00:18:20.863 ] 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "subsystem": "bdev", 00:18:20.863 "config": [ 00:18:20.863 { 00:18:20.863 "method": "bdev_set_options", 00:18:20.863 "params": { 00:18:20.863 "bdev_io_pool_size": 65535, 00:18:20.863 "bdev_io_cache_size": 256, 00:18:20.863 "bdev_auto_examine": true, 
00:18:20.863 "iobuf_small_cache_size": 128, 00:18:20.863 "iobuf_large_cache_size": 16 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "bdev_raid_set_options", 00:18:20.863 "params": { 00:18:20.863 "process_window_size_kb": 1024, 00:18:20.863 "process_max_bandwidth_mb_sec": 0 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "bdev_iscsi_set_options", 00:18:20.863 "params": { 00:18:20.863 "timeout_sec": 30 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "bdev_nvme_set_options", 00:18:20.863 "params": { 00:18:20.863 "action_on_timeout": "none", 00:18:20.863 "timeout_us": 0, 00:18:20.863 "timeout_admin_us": 0, 00:18:20.863 "keep_alive_timeout_ms": 10000, 00:18:20.863 "arbitration_burst": 0, 00:18:20.863 "low_priority_weight": 0, 00:18:20.863 "medium_priority_weight": 0, 00:18:20.863 "high_priority_weight": 0, 00:18:20.863 "nvme_adminq_poll_period_us": 10000, 00:18:20.863 "nvme_ioq_poll_period_us": 0, 00:18:20.863 "io_queue_requests": 0, 00:18:20.863 "delay_cmd_submit": true, 00:18:20.863 "transport_retry_count": 4, 00:18:20.863 "bdev_retry_count": 3, 00:18:20.863 "transport_ack_timeout": 0, 00:18:20.863 "ctrlr_loss_timeout_sec": 0, 00:18:20.863 "reconnect_delay_sec": 0, 00:18:20.863 "fast_io_fail_timeout_sec": 0, 00:18:20.863 "disable_auto_failback": false, 00:18:20.863 "generate_uuids": false, 00:18:20.863 "transport_tos": 0, 00:18:20.863 "nvme_error_stat": false, 00:18:20.863 "rdma_srq_size": 0, 00:18:20.863 "io_path_stat": false, 00:18:20.863 "allow_accel_sequence": false, 00:18:20.863 "rdma_max_cq_size": 0, 00:18:20.863 "rdma_cm_event_timeout_ms": 0, 00:18:20.863 "dhchap_digests": [ 00:18:20.863 "sha256", 00:18:20.863 "sha384", 00:18:20.863 "sha512" 00:18:20.863 ], 00:18:20.863 "dhchap_dhgroups": [ 00:18:20.863 "null", 00:18:20.863 "ffdhe2048", 00:18:20.863 "ffdhe3072", 00:18:20.863 "ffdhe4096", 00:18:20.863 "ffdhe6144", 00:18:20.863 "ffdhe8192" 00:18:20.863 ] 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "bdev_nvme_set_hotplug", 00:18:20.863 "params": { 00:18:20.863 "period_us": 100000, 00:18:20.863 "enable": false 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "bdev_malloc_create", 00:18:20.863 "params": { 00:18:20.863 "name": "malloc0", 00:18:20.863 "num_blocks": 8192, 00:18:20.863 "block_size": 4096, 00:18:20.863 "physical_block_size": 4096, 00:18:20.863 "uuid": "7ca9a65a-1e72-4cb9-a56c-78fd2eba00c9", 00:18:20.863 "optimal_io_boundary": 0, 00:18:20.863 "md_size": 0, 00:18:20.863 "dif_type": 0, 00:18:20.863 "dif_is_head_of_md": false, 00:18:20.863 "dif_pi_format": 0 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "bdev_wait_for_examine" 00:18:20.863 } 00:18:20.863 ] 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "subsystem": "nbd", 00:18:20.863 "config": [] 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "subsystem": "scheduler", 00:18:20.863 "config": [ 00:18:20.863 { 00:18:20.863 "method": "framework_set_scheduler", 00:18:20.863 "params": { 00:18:20.863 "name": "static" 00:18:20.863 } 00:18:20.863 } 00:18:20.863 ] 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "subsystem": "nvmf", 00:18:20.863 "config": [ 00:18:20.863 { 00:18:20.863 "method": "nvmf_set_config", 00:18:20.863 "params": { 00:18:20.863 "discovery_filter": "match_any", 00:18:20.863 "admin_cmd_passthru": { 00:18:20.863 "identify_ctrlr": false 00:18:20.863 }, 00:18:20.863 "dhchap_digests": [ 00:18:20.863 "sha256", 00:18:20.863 "sha384", 00:18:20.863 "sha512" 00:18:20.863 ], 00:18:20.863 "dhchap_dhgroups": [ 
00:18:20.863 "null", 00:18:20.863 "ffdhe2048", 00:18:20.863 "ffdhe3072", 00:18:20.863 "ffdhe4096", 00:18:20.863 "ffdhe6144", 00:18:20.863 "ffdhe8192" 00:18:20.863 ] 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_set_max_subsystems", 00:18:20.863 "params": { 00:18:20.863 "max_subsystems": 1024 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_set_crdt", 00:18:20.863 "params": { 00:18:20.863 "crdt1": 0, 00:18:20.863 "crdt2": 0, 00:18:20.863 "crdt3": 0 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_create_transport", 00:18:20.863 "params": { 00:18:20.863 "trtype": "TCP", 00:18:20.863 "max_queue_depth": 128, 00:18:20.863 "max_io_qpairs_per_ctrlr": 127, 00:18:20.863 "in_capsule_data_size": 4096, 00:18:20.863 "max_io_size": 131072, 00:18:20.863 "io_unit_size": 131072, 00:18:20.863 "max_aq_depth": 128, 00:18:20.863 "num_shared_buffers": 511, 00:18:20.863 "buf_cache_size": 4294967295, 00:18:20.863 "dif_insert_or_strip": false, 00:18:20.863 "zcopy": false, 00:18:20.863 "c2h_success": false, 00:18:20.863 "sock_priority": 0, 00:18:20.863 "abort_timeout_sec": 1, 00:18:20.863 "ack_timeout": 0, 00:18:20.863 "data_wr_pool_size": 0 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_create_subsystem", 00:18:20.863 "params": { 00:18:20.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.863 "allow_any_host": false, 00:18:20.863 "serial_number": "00000000000000000000", 00:18:20.863 "model_number": "SPDK bdev Controller", 00:18:20.863 "max_namespaces": 32, 00:18:20.863 "min_cntlid": 1, 00:18:20.863 "max_cntlid": 65519, 00:18:20.863 "ana_reporting": false 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_subsystem_add_host", 00:18:20.863 "params": { 00:18:20.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.863 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.863 "psk": "key0" 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_subsystem_add_ns", 00:18:20.863 "params": { 00:18:20.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.863 "namespace": { 00:18:20.863 "nsid": 1, 00:18:20.863 "bdev_name": "malloc0", 00:18:20.863 "nguid": "7CA9A65A1E724CB9A56C78FD2EBA00C9", 00:18:20.863 "uuid": "7ca9a65a-1e72-4cb9-a56c-78fd2eba00c9", 00:18:20.863 "no_auto_visible": false 00:18:20.863 } 00:18:20.863 } 00:18:20.863 }, 00:18:20.863 { 00:18:20.863 "method": "nvmf_subsystem_add_listener", 00:18:20.863 "params": { 00:18:20.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.863 "listen_address": { 00:18:20.863 "trtype": "TCP", 00:18:20.863 "adrfam": "IPv4", 00:18:20.863 "traddr": "10.0.0.3", 00:18:20.863 "trsvcid": "4420" 00:18:20.863 }, 00:18:20.864 "secure_channel": false, 00:18:20.864 "sock_impl": "ssl" 00:18:20.864 } 00:18:20.864 } 00:18:20.864 ] 00:18:20.864 } 00:18:20.864 ] 00:18:20.864 }' 00:18:20.864 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:21.122 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:21.122 "subsystems": [ 00:18:21.122 { 00:18:21.122 "subsystem": "keyring", 00:18:21.122 "config": [ 00:18:21.122 { 00:18:21.122 "method": "keyring_file_add_key", 00:18:21.122 "params": { 00:18:21.122 "name": "key0", 00:18:21.122 "path": "/tmp/tmp.VgAjhtmVhL" 00:18:21.122 } 00:18:21.122 } 00:18:21.122 ] 00:18:21.122 }, 00:18:21.122 { 00:18:21.122 "subsystem": "iobuf", 00:18:21.122 "config": [ 00:18:21.122 { 00:18:21.122 "method": 
"iobuf_set_options", 00:18:21.122 "params": { 00:18:21.122 "small_pool_count": 8192, 00:18:21.122 "large_pool_count": 1024, 00:18:21.122 "small_bufsize": 8192, 00:18:21.122 "large_bufsize": 135168, 00:18:21.122 "enable_numa": false 00:18:21.122 } 00:18:21.122 } 00:18:21.122 ] 00:18:21.122 }, 00:18:21.122 { 00:18:21.122 "subsystem": "sock", 00:18:21.122 "config": [ 00:18:21.122 { 00:18:21.122 "method": "sock_set_default_impl", 00:18:21.122 "params": { 00:18:21.122 "impl_name": "uring" 00:18:21.122 } 00:18:21.122 }, 00:18:21.122 { 00:18:21.123 "method": "sock_impl_set_options", 00:18:21.123 "params": { 00:18:21.123 "impl_name": "ssl", 00:18:21.123 "recv_buf_size": 4096, 00:18:21.123 "send_buf_size": 4096, 00:18:21.123 "enable_recv_pipe": true, 00:18:21.123 "enable_quickack": false, 00:18:21.123 "enable_placement_id": 0, 00:18:21.123 "enable_zerocopy_send_server": true, 00:18:21.123 "enable_zerocopy_send_client": false, 00:18:21.123 "zerocopy_threshold": 0, 00:18:21.123 "tls_version": 0, 00:18:21.123 "enable_ktls": false 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "sock_impl_set_options", 00:18:21.123 "params": { 00:18:21.123 "impl_name": "posix", 00:18:21.123 "recv_buf_size": 2097152, 00:18:21.123 "send_buf_size": 2097152, 00:18:21.123 "enable_recv_pipe": true, 00:18:21.123 "enable_quickack": false, 00:18:21.123 "enable_placement_id": 0, 00:18:21.123 "enable_zerocopy_send_server": true, 00:18:21.123 "enable_zerocopy_send_client": false, 00:18:21.123 "zerocopy_threshold": 0, 00:18:21.123 "tls_version": 0, 00:18:21.123 "enable_ktls": false 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "sock_impl_set_options", 00:18:21.123 "params": { 00:18:21.123 "impl_name": "uring", 00:18:21.123 "recv_buf_size": 2097152, 00:18:21.123 "send_buf_size": 2097152, 00:18:21.123 "enable_recv_pipe": true, 00:18:21.123 "enable_quickack": false, 00:18:21.123 "enable_placement_id": 0, 00:18:21.123 "enable_zerocopy_send_server": false, 00:18:21.123 "enable_zerocopy_send_client": false, 00:18:21.123 "zerocopy_threshold": 0, 00:18:21.123 "tls_version": 0, 00:18:21.123 "enable_ktls": false 00:18:21.123 } 00:18:21.123 } 00:18:21.123 ] 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "subsystem": "vmd", 00:18:21.123 "config": [] 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "subsystem": "accel", 00:18:21.123 "config": [ 00:18:21.123 { 00:18:21.123 "method": "accel_set_options", 00:18:21.123 "params": { 00:18:21.123 "small_cache_size": 128, 00:18:21.123 "large_cache_size": 16, 00:18:21.123 "task_count": 2048, 00:18:21.123 "sequence_count": 2048, 00:18:21.123 "buf_count": 2048 00:18:21.123 } 00:18:21.123 } 00:18:21.123 ] 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "subsystem": "bdev", 00:18:21.123 "config": [ 00:18:21.123 { 00:18:21.123 "method": "bdev_set_options", 00:18:21.123 "params": { 00:18:21.123 "bdev_io_pool_size": 65535, 00:18:21.123 "bdev_io_cache_size": 256, 00:18:21.123 "bdev_auto_examine": true, 00:18:21.123 "iobuf_small_cache_size": 128, 00:18:21.123 "iobuf_large_cache_size": 16 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "bdev_raid_set_options", 00:18:21.123 "params": { 00:18:21.123 "process_window_size_kb": 1024, 00:18:21.123 "process_max_bandwidth_mb_sec": 0 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "bdev_iscsi_set_options", 00:18:21.123 "params": { 00:18:21.123 "timeout_sec": 30 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "bdev_nvme_set_options", 00:18:21.123 "params": { 00:18:21.123 
"action_on_timeout": "none", 00:18:21.123 "timeout_us": 0, 00:18:21.123 "timeout_admin_us": 0, 00:18:21.123 "keep_alive_timeout_ms": 10000, 00:18:21.123 "arbitration_burst": 0, 00:18:21.123 "low_priority_weight": 0, 00:18:21.123 "medium_priority_weight": 0, 00:18:21.123 "high_priority_weight": 0, 00:18:21.123 "nvme_adminq_poll_period_us": 10000, 00:18:21.123 "nvme_ioq_poll_period_us": 0, 00:18:21.123 "io_queue_requests": 512, 00:18:21.123 "delay_cmd_submit": true, 00:18:21.123 "transport_retry_count": 4, 00:18:21.123 "bdev_retry_count": 3, 00:18:21.123 "transport_ack_timeout": 0, 00:18:21.123 "ctrlr_loss_timeout_sec": 0, 00:18:21.123 "reconnect_delay_sec": 0, 00:18:21.123 "fast_io_fail_timeout_sec": 0, 00:18:21.123 "disable_auto_failback": false, 00:18:21.123 "generate_uuids": false, 00:18:21.123 "transport_tos": 0, 00:18:21.123 "nvme_error_stat": false, 00:18:21.123 "rdma_srq_size": 0, 00:18:21.123 "io_path_stat": false, 00:18:21.123 "allow_accel_sequence": false, 00:18:21.123 "rdma_max_cq_size": 0, 00:18:21.123 "rdma_cm_event_timeout_ms": 0, 00:18:21.123 "dhchap_digests": [ 00:18:21.123 "sha256", 00:18:21.123 "sha384", 00:18:21.123 "sha512" 00:18:21.123 ], 00:18:21.123 "dhchap_dhgroups": [ 00:18:21.123 "null", 00:18:21.123 "ffdhe2048", 00:18:21.123 "ffdhe3072", 00:18:21.123 "ffdhe4096", 00:18:21.123 "ffdhe6144", 00:18:21.123 "ffdhe8192" 00:18:21.123 ] 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "bdev_nvme_attach_controller", 00:18:21.123 "params": { 00:18:21.123 "name": "nvme0", 00:18:21.123 "trtype": "TCP", 00:18:21.123 "adrfam": "IPv4", 00:18:21.123 "traddr": "10.0.0.3", 00:18:21.123 "trsvcid": "4420", 00:18:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.123 "prchk_reftag": false, 00:18:21.123 "prchk_guard": false, 00:18:21.123 "ctrlr_loss_timeout_sec": 0, 00:18:21.123 "reconnect_delay_sec": 0, 00:18:21.123 "fast_io_fail_timeout_sec": 0, 00:18:21.123 "psk": "key0", 00:18:21.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.123 "hdgst": false, 00:18:21.123 "ddgst": false, 00:18:21.123 "multipath": "multipath" 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "bdev_nvme_set_hotplug", 00:18:21.123 "params": { 00:18:21.123 "period_us": 100000, 00:18:21.123 "enable": false 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.123 "method": "bdev_enable_histogram", 00:18:21.123 "params": { 00:18:21.123 "name": "nvme0n1", 00:18:21.123 "enable": true 00:18:21.123 } 00:18:21.123 }, 00:18:21.123 { 00:18:21.124 "method": "bdev_wait_for_examine" 00:18:21.124 } 00:18:21.124 ] 00:18:21.124 }, 00:18:21.124 { 00:18:21.124 "subsystem": "nbd", 00:18:21.124 "config": [] 00:18:21.124 } 00:18:21.124 ] 00:18:21.124 }' 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72039 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72039 ']' 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72039 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72039 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:21.124 killing process with pid 72039 00:18:21.124 Received shutdown signal, test time was about 1.000000 seconds 00:18:21.124 00:18:21.124 Latency(us) 00:18:21.124 [2024-12-09T09:28:58.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.124 [2024-12-09T09:28:58.847Z] =================================================================================================================== 00:18:21.124 [2024-12-09T09:28:58.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72039' 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72039 00:18:21.124 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72039 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72007 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72007 ']' 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72007 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72007 00:18:21.383 killing process with pid 72007 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72007' 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72007 00:18:21.383 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72007 00:18:21.383 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:21.383 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:21.383 "subsystems": [ 00:18:21.383 { 00:18:21.383 "subsystem": "keyring", 00:18:21.383 "config": [ 00:18:21.383 { 00:18:21.383 "method": "keyring_file_add_key", 00:18:21.383 "params": { 00:18:21.383 "name": "key0", 00:18:21.383 "path": "/tmp/tmp.VgAjhtmVhL" 00:18:21.383 } 00:18:21.383 } 00:18:21.383 ] 00:18:21.383 }, 00:18:21.383 { 00:18:21.383 "subsystem": "iobuf", 00:18:21.383 "config": [ 00:18:21.383 { 00:18:21.383 "method": "iobuf_set_options", 00:18:21.383 "params": { 00:18:21.383 "small_pool_count": 8192, 00:18:21.383 "large_pool_count": 1024, 00:18:21.383 "small_bufsize": 8192, 00:18:21.383 "large_bufsize": 135168, 00:18:21.383 "enable_numa": false 00:18:21.383 } 00:18:21.383 } 00:18:21.383 ] 00:18:21.383 }, 00:18:21.383 { 00:18:21.383 "subsystem": "sock", 00:18:21.383 "config": [ 00:18:21.384 { 00:18:21.384 "method": "sock_set_default_impl", 00:18:21.384 "params": { 00:18:21.384 "impl_name": "uring" 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "sock_impl_set_options", 00:18:21.384 "params": { 00:18:21.384 "impl_name": "ssl", 00:18:21.384 "recv_buf_size": 4096, 00:18:21.384 
"send_buf_size": 4096, 00:18:21.384 "enable_recv_pipe": true, 00:18:21.384 "enable_quickack": false, 00:18:21.384 "enable_placement_id": 0, 00:18:21.384 "enable_zerocopy_send_server": true, 00:18:21.384 "enable_zerocopy_send_client": false, 00:18:21.384 "zerocopy_threshold": 0, 00:18:21.384 "tls_version": 0, 00:18:21.384 "enable_ktls": false 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "sock_impl_set_options", 00:18:21.384 "params": { 00:18:21.384 "impl_name": "posix", 00:18:21.384 "recv_buf_size": 2097152, 00:18:21.384 "send_buf_size": 2097152, 00:18:21.384 "enable_recv_pipe": true, 00:18:21.384 "enable_quickack": false, 00:18:21.384 "enable_placement_id": 0, 00:18:21.384 "enable_zerocopy_send_server": true, 00:18:21.384 "enable_zerocopy_send_client": false, 00:18:21.384 "zerocopy_threshold": 0, 00:18:21.384 "tls_version": 0, 00:18:21.384 "enable_ktls": false 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "sock_impl_set_options", 00:18:21.384 "params": { 00:18:21.384 "impl_name": "uring", 00:18:21.384 "recv_buf_size": 2097152, 00:18:21.384 "send_buf_size": 2097152, 00:18:21.384 "enable_recv_pipe": true, 00:18:21.384 "enable_quickack": false, 00:18:21.384 "enable_placement_id": 0, 00:18:21.384 "enable_zerocopy_send_server": false, 00:18:21.384 "enable_zerocopy_send_client": false, 00:18:21.384 "zerocopy_threshold": 0, 00:18:21.384 "tls_version": 0, 00:18:21.384 "enable_ktls": false 00:18:21.384 } 00:18:21.384 } 00:18:21.384 ] 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "subsystem": "vmd", 00:18:21.384 "config": [] 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "subsystem": "accel", 00:18:21.384 "config": [ 00:18:21.384 { 00:18:21.384 "method": "accel_set_options", 00:18:21.384 "params": { 00:18:21.384 "small_cache_size": 128, 00:18:21.384 "large_cache_size": 16, 00:18:21.384 "task_count": 2048, 00:18:21.384 "sequence_count": 2048, 00:18:21.384 "buf_count": 2048 00:18:21.384 } 00:18:21.384 } 00:18:21.384 ] 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "subsystem": "bdev", 00:18:21.384 "config": [ 00:18:21.384 { 00:18:21.384 "method": "bdev_set_options", 00:18:21.384 "params": { 00:18:21.384 "bdev_io_pool_size": 65535, 00:18:21.384 "bdev_io_cache_size": 256, 00:18:21.384 "bdev_auto_examine": true, 00:18:21.384 "iobuf_small_cache_size": 128, 00:18:21.384 "iobuf_large_cache_size": 16 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "bdev_raid_set_options", 00:18:21.384 "params": { 00:18:21.384 "process_window_size_kb": 1024, 00:18:21.384 "process_max_bandwidth_mb_sec": 0 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "bdev_iscsi_set_options", 00:18:21.384 "params": { 00:18:21.384 "timeout_sec": 30 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "bdev_nvme_set_options", 00:18:21.384 "params": { 00:18:21.384 "action_on_timeout": "none", 00:18:21.384 "timeout_us": 0, 00:18:21.384 "timeout_admin_us": 0, 00:18:21.384 "keep_alive_timeout_ms": 10000, 00:18:21.384 "arbitration_burst": 0, 00:18:21.384 "low_priority_weight": 0, 00:18:21.384 "medium_priority_weight": 0, 00:18:21.384 "high_priority_weight": 0, 00:18:21.384 "nvme_adminq_poll_period_us": 10000, 00:18:21.384 "nvme_ioq_poll_period_us": 0, 00:18:21.384 "io_queue_requests": 0, 00:18:21.384 "delay_cmd_submit": true, 00:18:21.384 "transport_retry_count": 4, 00:18:21.384 "bdev_retry_count": 3, 00:18:21.384 "transport_ack_timeout": 0, 00:18:21.384 "ctrlr_loss_timeout_sec": 0, 00:18:21.384 "reconnect_delay_sec": 0, 00:18:21.384 
"fast_io_fail_timeout_sec": 0, 00:18:21.384 "disable_auto_failback": false, 00:18:21.384 "generate_uuids": false, 00:18:21.384 "transport_tos": 0, 00:18:21.384 "nvme_error_stat": false, 00:18:21.384 "rdma_srq_size": 0, 00:18:21.384 "io_path_stat": false, 00:18:21.384 "allow_accel_sequence": false, 00:18:21.384 "rdma_max_cq_size": 0, 00:18:21.384 "rdma_cm_event_timeout_ms": 0, 00:18:21.384 "dhchap_digests": [ 00:18:21.384 "sha256", 00:18:21.384 "sha384", 00:18:21.384 "sha512" 00:18:21.384 ], 00:18:21.384 "dhchap_dhgroups": [ 00:18:21.384 "null", 00:18:21.384 "ffdhe2048", 00:18:21.384 "ffdhe3072", 00:18:21.384 "ffdhe4096", 00:18:21.384 "ffdhe6144", 00:18:21.384 "ffdhe8192" 00:18:21.384 ] 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "bdev_nvme_set_hotplug", 00:18:21.384 "params": { 00:18:21.384 "period_us": 100000, 00:18:21.384 "enable": false 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "bdev_malloc_create", 00:18:21.384 "params": { 00:18:21.384 "name": "malloc0", 00:18:21.384 "num_blocks": 8192, 00:18:21.384 "block_size": 4096, 00:18:21.384 "physical_block_size": 4096, 00:18:21.384 "uuid": "7ca9a65a-1e72-4cb9-a56c-78fd2eba00c9", 00:18:21.384 "optimal_io_boundary": 0, 00:18:21.384 "md_size": 0, 00:18:21.384 "dif_type": 0, 00:18:21.384 "dif_is_head_of_md": false, 00:18:21.384 "dif_pi_format": 0 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "bdev_wait_for_examine" 00:18:21.384 } 00:18:21.384 ] 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "subsystem": "nbd", 00:18:21.384 "config": [] 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "subsystem": "scheduler", 00:18:21.384 "config": [ 00:18:21.384 { 00:18:21.384 "method": "framework_set_scheduler", 00:18:21.384 "params": { 00:18:21.384 "name": "static" 00:18:21.384 } 00:18:21.384 } 00:18:21.384 ] 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "subsystem": "nvmf", 00:18:21.384 "config": [ 00:18:21.384 { 00:18:21.384 "method": "nvmf_set_config", 00:18:21.384 "params": { 00:18:21.384 "discovery_filter": "match_any", 00:18:21.384 "admin_cmd_passthru": { 00:18:21.384 "identify_ctrlr": false 00:18:21.384 }, 00:18:21.384 "dhchap_digests": [ 00:18:21.384 "sha256", 00:18:21.384 "sha384", 00:18:21.384 "sha512" 00:18:21.384 ], 00:18:21.384 "dhchap_dhgroups": [ 00:18:21.384 "null", 00:18:21.384 "ffdhe2048", 00:18:21.384 "ffdhe3072", 00:18:21.384 "ffdhe4096", 00:18:21.384 "ffdhe6144", 00:18:21.384 "ffdhe8192" 00:18:21.384 ] 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "nvmf_set_max_subsystems", 00:18:21.384 "params": { 00:18:21.384 "max_subsystems": 1024 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "nvmf_set_crdt", 00:18:21.384 "params": { 00:18:21.384 "crdt1": 0, 00:18:21.384 "crdt2": 0, 00:18:21.384 "crdt3": 0 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "nvmf_create_transport", 00:18:21.384 "params": { 00:18:21.384 "trtype": "TCP", 00:18:21.384 "max_queue_depth": 128, 00:18:21.384 "max_io_qpairs_per_ctrlr": 127, 00:18:21.384 "in_capsule_data_size": 4096, 00:18:21.384 "max_io_size": 131072, 00:18:21.384 "io_unit_size": 131072, 00:18:21.384 "max_aq_depth": 128, 00:18:21.384 "num_shared_buffers": 511, 00:18:21.384 "buf_cache_size": 4294967295, 00:18:21.384 "dif_insert_or_strip": false, 00:18:21.384 "zcopy": false, 00:18:21.384 "c2h_success": false, 00:18:21.384 "sock_priority": 0, 00:18:21.384 "abort_timeout_sec": 1, 00:18:21.384 "ack_timeout": 0, 00:18:21.384 "data_wr_pool_size": 0 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 
00:18:21.384 "method": "nvmf_create_subsystem", 00:18:21.384 "params": { 00:18:21.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.384 "allow_any_host": false, 00:18:21.384 "serial_number": "00000000000000000000", 00:18:21.384 "model_number": "SPDK bdev Controller", 00:18:21.384 "max_namespaces": 32, 00:18:21.384 "min_cntlid": 1, 00:18:21.384 "max_cntlid": 65519, 00:18:21.384 "ana_reporting": false 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "nvmf_subsystem_add_host", 00:18:21.384 "params": { 00:18:21.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.384 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.384 "psk": "key0" 00:18:21.384 } 00:18:21.384 }, 00:18:21.384 { 00:18:21.384 "method": "nvmf_subsystem_add_ns", 00:18:21.384 "params": { 00:18:21.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.384 "namespace": { 00:18:21.384 "nsid": 1, 00:18:21.384 "bdev_name": "malloc0", 00:18:21.384 "nguid": "7CA9A65A1E724CB9A56C78FD2EBA00C9", 00:18:21.384 "uuid": "7ca9a65a-1e72-4cb9-a56c-78fd2eba00c9", 00:18:21.384 "no_auto_visible": false 00:18:21.384 } 00:18:21.384 } 00:18:21.384 }, 00:18:21.385 { 00:18:21.385 "method": "nvmf_subsystem_add_listener", 00:18:21.385 "params": { 00:18:21.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.385 "listen_address": { 00:18:21.385 "trtype": "TCP", 00:18:21.385 "adrfam": "IPv4", 00:18:21.385 "traddr": "10.0.0.3", 00:18:21.385 "trsvcid": "4420" 00:18:21.385 }, 00:18:21.385 "secure_channel": false, 00:18:21.385 "sock_impl": "ssl" 00:18:21.385 } 00:18:21.385 } 00:18:21.385 ] 00:18:21.385 } 00:18:21.385 ] 00:18:21.385 }' 00:18:21.385 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.385 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.385 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.385 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72094 00:18:21.385 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72094 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72094 ']' 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.643 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.643 [2024-12-09 09:28:59.157565] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:18:21.643 [2024-12-09 09:28:59.157639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.643 [2024-12-09 09:28:59.294722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.643 [2024-12-09 09:28:59.346400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.643 [2024-12-09 09:28:59.346448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.643 [2024-12-09 09:28:59.346474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.643 [2024-12-09 09:28:59.346484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.643 [2024-12-09 09:28:59.346492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.643 [2024-12-09 09:28:59.346829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.901 [2024-12-09 09:28:59.503732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.901 [2024-12-09 09:28:59.576432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.901 [2024-12-09 09:28:59.608332] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.901 [2024-12-09 09:28:59.608578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72127 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72127 /var/tmp/bdevperf.sock 00:18:22.468 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72127 ']' 00:18:22.469 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.469 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.469 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:22.469 "subsystems": [ 00:18:22.469 { 00:18:22.469 "subsystem": "keyring", 00:18:22.469 "config": [ 00:18:22.469 { 00:18:22.469 "method": "keyring_file_add_key", 00:18:22.469 "params": { 00:18:22.469 "name": "key0", 00:18:22.469 "path": "/tmp/tmp.VgAjhtmVhL" 00:18:22.469 } 00:18:22.469 } 00:18:22.469 ] 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "subsystem": "iobuf", 00:18:22.469 "config": [ 00:18:22.469 { 00:18:22.469 "method": "iobuf_set_options", 00:18:22.469 "params": { 
00:18:22.469 "small_pool_count": 8192, 00:18:22.469 "large_pool_count": 1024, 00:18:22.469 "small_bufsize": 8192, 00:18:22.469 "large_bufsize": 135168, 00:18:22.469 "enable_numa": false 00:18:22.469 } 00:18:22.469 } 00:18:22.469 ] 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "subsystem": "sock", 00:18:22.469 "config": [ 00:18:22.469 { 00:18:22.469 "method": "sock_set_default_impl", 00:18:22.469 "params": { 00:18:22.469 "impl_name": "uring" 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "sock_impl_set_options", 00:18:22.469 "params": { 00:18:22.469 "impl_name": "ssl", 00:18:22.469 "recv_buf_size": 4096, 00:18:22.469 "send_buf_size": 4096, 00:18:22.469 "enable_recv_pipe": true, 00:18:22.469 "enable_quickack": false, 00:18:22.469 "enable_placement_id": 0, 00:18:22.469 "enable_zerocopy_send_server": true, 00:18:22.469 "enable_zerocopy_send_client": false, 00:18:22.469 "zerocopy_threshold": 0, 00:18:22.469 "tls_version": 0, 00:18:22.469 "enable_ktls": false 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "sock_impl_set_options", 00:18:22.469 "params": { 00:18:22.469 "impl_name": "posix", 00:18:22.469 "recv_buf_size": 2097152, 00:18:22.469 "send_buf_size": 2097152, 00:18:22.469 "enable_recv_pipe": true, 00:18:22.469 "enable_quickack": false, 00:18:22.469 "enable_placement_id": 0, 00:18:22.469 "enable_zerocopy_send_server": true, 00:18:22.469 "enable_zerocopy_send_client": false, 00:18:22.469 "zerocopy_threshold": 0, 00:18:22.469 "tls_version": 0, 00:18:22.469 "enable_ktls": false 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "sock_impl_set_options", 00:18:22.469 "params": { 00:18:22.469 "impl_name": "uring", 00:18:22.469 "recv_buf_size": 2097152, 00:18:22.469 "send_buf_size": 2097152, 00:18:22.469 "enable_recv_pipe": true, 00:18:22.469 "enable_quickack": false, 00:18:22.469 "enable_placement_id": 0, 00:18:22.469 "enable_zerocopy_send_server": false, 00:18:22.469 "enable_zerocopy_send_client": false, 00:18:22.469 "zerocopy_threshold": 0, 00:18:22.469 "tls_version": 0, 00:18:22.469 "enable_ktls": false 00:18:22.469 } 00:18:22.469 } 00:18:22.469 ] 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "subsystem": "vmd", 00:18:22.469 "config": [] 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "subsystem": "accel", 00:18:22.469 "config": [ 00:18:22.469 { 00:18:22.469 "method": "accel_set_options", 00:18:22.469 "params": { 00:18:22.469 "small_cache_size": 128, 00:18:22.469 "large_cache_size": 16, 00:18:22.469 "task_count": 2048, 00:18:22.469 "sequence_count": 2048, 00:18:22.469 "buf_count": 2048 00:18:22.469 } 00:18:22.469 } 00:18:22.469 ] 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "subsystem": "bdev", 00:18:22.469 "config": [ 00:18:22.469 { 00:18:22.469 "method": "bdev_set_options", 00:18:22.469 "params": { 00:18:22.469 "bdev_io_pool_size": 65535, 00:18:22.469 "bdev_io_cache_size": 256, 00:18:22.469 "bdev_auto_examine": true, 00:18:22.469 "iobuf_small_cache_size": 128, 00:18:22.469 "iobuf_large_cache_size": 16 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_raid_set_options", 00:18:22.469 "params": { 00:18:22.469 "process_window_size_kb": 1024, 00:18:22.469 "process_max_bandwidth_mb_sec": 0 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_iscsi_set_options", 00:18:22.469 "params": { 00:18:22.469 "timeout_sec": 30 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_nvme_set_options", 00:18:22.469 "params": { 00:18:22.469 "action_on_timeout": "none", 00:18:22.469 
"timeout_us": 0, 00:18:22.469 "timeout_admin_us": 0, 00:18:22.469 "keep_alive_timeout_ms": 10000, 00:18:22.469 "arbitration_burst": 0, 00:18:22.469 "low_priority_weight": 0, 00:18:22.469 "medium_priority_weight": 0, 00:18:22.469 "high_priority_weight": 0, 00:18:22.469 "nvme_adminq_poll_period_us": 10000, 00:18:22.469 "nvme_ioq_poll_period_us": 0, 00:18:22.469 "io_queue_requests": 512, 00:18:22.469 "delay_cmd_submit": true, 00:18:22.469 "transport_retry_count": 4, 00:18:22.469 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.469 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.469 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.469 "bdev_retry_count": 3, 00:18:22.469 "transport_ack_timeout": 0, 00:18:22.469 "ctrlr_loss_timeout_sec": 0, 00:18:22.469 "reconnect_delay_sec": 0, 00:18:22.469 "fast_io_fail_timeout_sec": 0, 00:18:22.469 "disable_auto_failback": false, 00:18:22.469 "generate_uuids": false, 00:18:22.469 "transport_tos": 0, 00:18:22.469 "nvme_error_stat": false, 00:18:22.469 "rdma_srq_size": 0, 00:18:22.469 "io_path_stat": false, 00:18:22.469 "allow_accel_sequence": false, 00:18:22.469 "rdma_max_cq_size": 0, 00:18:22.469 "rdma_cm_event_timeout_ms": 0, 00:18:22.469 "dhchap_digests": [ 00:18:22.469 "sha256", 00:18:22.469 "sha384", 00:18:22.469 "sha512" 00:18:22.469 ], 00:18:22.469 "dhchap_dhgroups": [ 00:18:22.469 "null", 00:18:22.469 "ffdhe2048", 00:18:22.469 "ffdhe3072", 00:18:22.469 "ffdhe4096", 00:18:22.469 "ffdhe6144", 00:18:22.469 "ffdhe8192" 00:18:22.469 ] 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_nvme_attach_controller", 00:18:22.469 "params": { 00:18:22.469 "name": "nvme0", 00:18:22.469 "trtype": "TCP", 00:18:22.469 "adrfam": "IPv4", 00:18:22.469 "traddr": "10.0.0.3", 00:18:22.469 "trsvcid": "4420", 00:18:22.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.469 "prchk_reftag": false, 00:18:22.469 "prchk_guard": false, 00:18:22.469 "ctrlr_loss_timeout_sec": 0, 00:18:22.469 "reconnect_delay_sec": 0, 00:18:22.469 "fast_io_fail_timeout_sec": 0, 00:18:22.469 "psk": "key0", 00:18:22.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.469 "hdgst": false, 00:18:22.469 "ddgst": false, 00:18:22.469 "multipath": "multipath" 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_nvme_set_hotplug", 00:18:22.469 "params": { 00:18:22.469 "period_us": 100000, 00:18:22.469 "enable": false 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_enable_histogram", 00:18:22.469 "params": { 00:18:22.469 "name": "nvme0n1", 00:18:22.469 "enable": true 00:18:22.469 } 00:18:22.469 }, 00:18:22.469 { 00:18:22.469 "method": "bdev_wait_for_examine" 00:18:22.469 } 00:18:22.469 ] 00:18:22.469 }, 00:18:22.470 { 00:18:22.470 "subsystem": "nbd", 00:18:22.470 "config": [] 00:18:22.470 } 00:18:22.470 ] 00:18:22.470 }' 00:18:22.470 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:22.470 [2024-12-09 09:29:00.187625] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:18:22.470 [2024-12-09 09:29:00.187858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72127 ] 00:18:22.728 [2024-12-09 09:29:00.343024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.728 [2024-12-09 09:29:00.394896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.986 [2024-12-09 09:29:00.517827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.986 [2024-12-09 09:29:00.559999] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.552 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.552 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.552 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:23.552 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:23.810 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.810 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:23.810 Running I/O for 1 seconds... 00:18:24.744 5898.00 IOPS, 23.04 MiB/s 00:18:24.744 Latency(us) 00:18:24.744 [2024-12-09T09:29:02.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.744 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:24.744 Verification LBA range: start 0x0 length 0x2000 00:18:24.744 nvme0n1 : 1.01 5957.01 23.27 0.00 0.00 21341.07 4026.91 15791.81 00:18:24.744 [2024-12-09T09:29:02.467Z] =================================================================================================================== 00:18:24.744 [2024-12-09T09:29:02.467Z] Total : 5957.01 23.27 0.00 0.00 21341.07 4026.91 15791.81 00:18:24.744 { 00:18:24.744 "results": [ 00:18:24.744 { 00:18:24.744 "job": "nvme0n1", 00:18:24.744 "core_mask": "0x2", 00:18:24.744 "workload": "verify", 00:18:24.744 "status": "finished", 00:18:24.744 "verify_range": { 00:18:24.744 "start": 0, 00:18:24.744 "length": 8192 00:18:24.744 }, 00:18:24.744 "queue_depth": 128, 00:18:24.744 "io_size": 4096, 00:18:24.744 "runtime": 1.01175, 00:18:24.744 "iops": 5957.0051890289105, 00:18:24.744 "mibps": 23.269551519644182, 00:18:24.744 "io_failed": 0, 00:18:24.744 "io_timeout": 0, 00:18:24.744 "avg_latency_us": 21341.07370487425, 00:18:24.744 "min_latency_us": 4026.910843373494, 00:18:24.744 "max_latency_us": 15791.807228915663 00:18:24.744 } 00:18:24.744 ], 00:18:24.744 "core_count": 1 00:18:24.744 } 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:24.744 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:24.744 nvmf_trace.0 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72127 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72127 ']' 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72127 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72127 00:18:25.002 killing process with pid 72127 00:18:25.002 Received shutdown signal, test time was about 1.000000 seconds 00:18:25.002 00:18:25.002 Latency(us) 00:18:25.002 [2024-12-09T09:29:02.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.002 [2024-12-09T09:29:02.725Z] =================================================================================================================== 00:18:25.002 [2024-12-09T09:29:02.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72127' 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72127 00:18:25.002 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72127 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:25.261 rmmod nvme_tcp 00:18:25.261 rmmod nvme_fabrics 00:18:25.261 rmmod nvme_keyring 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72094 ']' 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72094 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72094 ']' 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72094 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72094 00:18:25.261 killing process with pid 72094 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72094' 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72094 00:18:25.261 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72094 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:25.520 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:25.520 09:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hu81noV0jy /tmp/tmp.gOVsKITlaA /tmp/tmp.VgAjhtmVhL 00:18:25.778 00:18:25.778 real 1m25.816s 00:18:25.778 user 2m8.125s 00:18:25.778 sys 0m33.103s 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.778 ************************************ 00:18:25.778 END TEST nvmf_tls 00:18:25.778 ************************************ 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.778 ************************************ 00:18:25.778 START TEST nvmf_fips 00:18:25.778 ************************************ 00:18:25.778 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:26.036 * Looking for test storage... 
00:18:26.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.036 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:26.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.037 --rc genhtml_branch_coverage=1 00:18:26.037 --rc genhtml_function_coverage=1 00:18:26.037 --rc genhtml_legend=1 00:18:26.037 --rc geninfo_all_blocks=1 00:18:26.037 --rc geninfo_unexecuted_blocks=1 00:18:26.037 00:18:26.037 ' 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:26.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.037 --rc genhtml_branch_coverage=1 00:18:26.037 --rc genhtml_function_coverage=1 00:18:26.037 --rc genhtml_legend=1 00:18:26.037 --rc geninfo_all_blocks=1 00:18:26.037 --rc geninfo_unexecuted_blocks=1 00:18:26.037 00:18:26.037 ' 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:26.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.037 --rc genhtml_branch_coverage=1 00:18:26.037 --rc genhtml_function_coverage=1 00:18:26.037 --rc genhtml_legend=1 00:18:26.037 --rc geninfo_all_blocks=1 00:18:26.037 --rc geninfo_unexecuted_blocks=1 00:18:26.037 00:18:26.037 ' 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:26.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.037 --rc genhtml_branch_coverage=1 00:18:26.037 --rc genhtml_function_coverage=1 00:18:26.037 --rc genhtml_legend=1 00:18:26.037 --rc geninfo_all_blocks=1 00:18:26.037 --rc geninfo_unexecuted_blocks=1 00:18:26.037 00:18:26.037 ' 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.037 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.296 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:26.296 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:26.297 Error setting digest 00:18:26.297 40A23787887F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:26.297 40A23787887F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:26.297 
09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:26.297 Cannot find device "nvmf_init_br" 00:18:26.297 09:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:26.297 Cannot find device "nvmf_init_br2" 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:26.297 09:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:26.297 Cannot find device "nvmf_tgt_br" 00:18:26.297 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:26.297 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.555 Cannot find device "nvmf_tgt_br2" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:26.555 Cannot find device "nvmf_init_br" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:26.555 Cannot find device "nvmf_init_br2" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:26.555 Cannot find device "nvmf_tgt_br" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:26.555 Cannot find device "nvmf_tgt_br2" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:26.555 Cannot find device "nvmf_br" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:26.555 Cannot find device "nvmf_init_if" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:26.555 Cannot find device "nvmf_init_if2" 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.555 09:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.555 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.556 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:26.556 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.556 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.556 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:26.556 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:26.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:26.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:18:26.813 00:18:26.813 --- 10.0.0.3 ping statistics --- 00:18:26.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.813 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:26.813 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:26.813 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:18:26.813 00:18:26.813 --- 10.0.0.4 ping statistics --- 00:18:26.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.813 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:18:26.813 00:18:26.813 --- 10.0.0.1 ping statistics --- 00:18:26.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.813 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:26.813 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:26.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:26.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:18:26.813 00:18:26.813 --- 10.0.0.2 ping statistics --- 00:18:26.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.814 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72445 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72445 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72445 ']' 00:18:26.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:26.814 09:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.071 [2024-12-09 09:29:04.571542] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
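At this point nvmf_veth_init has finished wiring up the test network and nvmf_tgt is being launched inside the nvmf_tgt_ns_spdk namespace. The following is a condensed sketch of the topology those ip/iptables calls build, assuming iproute2 and iptables are available and showing only one initiator/target pair; the real helper in test/nvmf/common.sh also configures the second pair, IPv4 addresses 10.0.0.2/10.0.0.4, and prior cleanup.

#!/usr/bin/env bash
# Rough sketch of the virtual topology nvmf_veth_init built in the trace above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
# initiator-side veth pair stays in the default namespace...
ip link add nvmf_init_if type veth peer name nvmf_init_br
# ...target-side pair has one end moved into the namespace where nvmf_tgt runs
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# a bridge ties the host-side ends of both pairs onto one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# open the NVMe/TCP listener port and let traffic cross the bridge, as above
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3            # same reachability check the trace performs

Because both veth pairs hang off the same bridge and share 10.0.0.0/24, the initiator in the default namespace and the target inside nvmf_tgt_ns_spdk can ping each other in both directions, which is exactly what the ping output above verifies before the target is started.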
00:18:27.071 [2024-12-09 09:29:04.571612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.071 [2024-12-09 09:29:04.711490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.071 [2024-12-09 09:29:04.761833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.071 [2024-12-09 09:29:04.761885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.071 [2024-12-09 09:29:04.761897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.071 [2024-12-09 09:29:04.761907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.071 [2024-12-09 09:29:04.761917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.071 [2024-12-09 09:29:04.762214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.341 [2024-12-09 09:29:04.805056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.BgN 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.BgN 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.BgN 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.BgN 00:18:27.907 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.165 [2024-12-09 09:29:05.790120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.165 [2024-12-09 09:29:05.806054] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.165 [2024-12-09 09:29:05.806258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:28.165 malloc0 00:18:28.165 09:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72487 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72487 /var/tmp/bdevperf.sock 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72487 ']' 00:18:28.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.165 09:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:28.424 [2024-12-09 09:29:05.944396] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:28.424 [2024-12-09 09:29:05.944627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72487 ] 00:18:28.424 [2024-12-09 09:29:06.082542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.424 [2024-12-09 09:29:06.130656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.683 [2024-12-09 09:29:06.172599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.250 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.250 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:29.250 09:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BgN 00:18:29.509 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.767 [2024-12-09 09:29:07.244652] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.767 TLSTESTn1 00:18:29.767 09:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.767 Running I/O for 10 seconds... 
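The bdevperf run above is driven entirely over its RPC socket: the interchange-format PSK is written to a temporary file, registered with keyring_file_add_key, and then presented by bdev_nvme_attach_controller during the TLS handshake before the verify workload starts. A condensed sketch of that sequence follows, assuming the repo checkout path used by this job and an illustrative temp-file name; the key string is the example PSK shown in the trace.

#!/usr/bin/env bash
# Condensed sketch of the TLS setup the trace drives over the bdevperf RPC socket.
SPDK_DIR=/home/vagrant/spdk_repo/spdk          # assumption: checkout path used by this job
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# 1. Persist the PSK with restrictive permissions.
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# 2. Register it with the keyring, then attach a controller that presents it over TLS.
$RPC keyring_file_add_key key0 "$key_path"
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# 3. Drive the 10-second verify workload that produces the IOPS/latency table below.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests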
00:18:32.082 5287.00 IOPS, 20.65 MiB/s [2024-12-09T09:29:10.742Z] 5179.00 IOPS, 20.23 MiB/s [2024-12-09T09:29:11.678Z] 5153.67 IOPS, 20.13 MiB/s [2024-12-09T09:29:12.613Z] 5126.25 IOPS, 20.02 MiB/s [2024-12-09T09:29:13.549Z] 5111.40 IOPS, 19.97 MiB/s [2024-12-09T09:29:14.497Z] 5108.17 IOPS, 19.95 MiB/s [2024-12-09T09:29:15.874Z] 5102.43 IOPS, 19.93 MiB/s [2024-12-09T09:29:16.442Z] 5111.12 IOPS, 19.97 MiB/s [2024-12-09T09:29:17.826Z] 5187.67 IOPS, 20.26 MiB/s [2024-12-09T09:29:17.826Z] 5252.30 IOPS, 20.52 MiB/s 00:18:40.103 Latency(us) 00:18:40.103 [2024-12-09T09:29:17.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.103 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.103 Verification LBA range: start 0x0 length 0x2000 00:18:40.103 TLSTESTn1 : 10.01 5258.76 20.54 0.00 0.00 24306.59 4105.87 24740.50 00:18:40.103 [2024-12-09T09:29:17.826Z] =================================================================================================================== 00:18:40.103 [2024-12-09T09:29:17.826Z] Total : 5258.76 20.54 0.00 0.00 24306.59 4105.87 24740.50 00:18:40.103 { 00:18:40.103 "results": [ 00:18:40.103 { 00:18:40.103 "job": "TLSTESTn1", 00:18:40.103 "core_mask": "0x4", 00:18:40.103 "workload": "verify", 00:18:40.103 "status": "finished", 00:18:40.103 "verify_range": { 00:18:40.103 "start": 0, 00:18:40.103 "length": 8192 00:18:40.103 }, 00:18:40.103 "queue_depth": 128, 00:18:40.103 "io_size": 4096, 00:18:40.103 "runtime": 10.012054, 00:18:40.103 "iops": 5258.761089382858, 00:18:40.103 "mibps": 20.54203550540179, 00:18:40.103 "io_failed": 0, 00:18:40.103 "io_timeout": 0, 00:18:40.103 "avg_latency_us": 24306.585962958783, 00:18:40.103 "min_latency_us": 4105.869879518073, 00:18:40.103 "max_latency_us": 24740.49799196787 00:18:40.103 } 00:18:40.103 ], 00:18:40.103 "core_count": 1 00:18:40.103 } 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:40.103 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:40.104 nvmf_trace.0 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72487 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72487 ']' 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72487 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72487 00:18:40.104 killing process with pid 72487 00:18:40.104 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.104 00:18:40.104 Latency(us) 00:18:40.104 [2024-12-09T09:29:17.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.104 [2024-12-09T09:29:17.827Z] =================================================================================================================== 00:18:40.104 [2024-12-09T09:29:17.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72487' 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72487 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72487 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.104 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.363 rmmod nvme_tcp 00:18:40.363 rmmod nvme_fabrics 00:18:40.363 rmmod nvme_keyring 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72445 ']' 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72445 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72445 ']' 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72445 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72445 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:18:40.363 killing process with pid 72445 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72445' 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72445 00:18:40.363 09:29:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72445 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:40.623 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:40.882 09:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.BgN 00:18:40.882 00:18:40.882 real 0m14.963s 00:18:40.882 user 0m18.743s 00:18:40.882 sys 0m6.883s 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.882 ************************************ 00:18:40.882 END TEST nvmf_fips 00:18:40.882 ************************************ 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.882 ************************************ 00:18:40.882 START TEST nvmf_control_msg_list 00:18:40.882 ************************************ 00:18:40.882 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:41.142 * Looking for test storage... 00:18:41.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.142 --rc genhtml_branch_coverage=1 00:18:41.142 --rc genhtml_function_coverage=1 00:18:41.142 --rc genhtml_legend=1 00:18:41.142 --rc geninfo_all_blocks=1 00:18:41.142 --rc geninfo_unexecuted_blocks=1 00:18:41.142 00:18:41.142 ' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.142 --rc genhtml_branch_coverage=1 00:18:41.142 --rc genhtml_function_coverage=1 00:18:41.142 --rc genhtml_legend=1 00:18:41.142 --rc geninfo_all_blocks=1 00:18:41.142 --rc geninfo_unexecuted_blocks=1 00:18:41.142 00:18:41.142 ' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.142 --rc genhtml_branch_coverage=1 00:18:41.142 --rc genhtml_function_coverage=1 00:18:41.142 --rc genhtml_legend=1 00:18:41.142 --rc geninfo_all_blocks=1 00:18:41.142 --rc geninfo_unexecuted_blocks=1 00:18:41.142 00:18:41.142 ' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:41.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.142 --rc genhtml_branch_coverage=1 00:18:41.142 --rc genhtml_function_coverage=1 00:18:41.142 --rc genhtml_legend=1 00:18:41.142 --rc geninfo_all_blocks=1 00:18:41.142 --rc 
geninfo_unexecuted_blocks=1 00:18:41.142 00:18:41.142 ' 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:41.142 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:41.143 Cannot find device "nvmf_init_br" 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:41.143 Cannot find device "nvmf_init_br2" 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:41.143 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:41.403 Cannot find device "nvmf_tgt_br" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.403 Cannot find device "nvmf_tgt_br2" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:41.403 Cannot find device "nvmf_init_br" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:41.403 Cannot find device "nvmf_init_br2" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:41.403 Cannot find device "nvmf_tgt_br" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:41.403 Cannot find device "nvmf_tgt_br2" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:41.403 Cannot find device "nvmf_br" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:41.403 Cannot find 
device "nvmf_init_if" 00:18:41.403 09:29:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:41.403 Cannot find device "nvmf_init_if2" 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:41.403 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:41.663 09:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:41.663 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.663 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:41.663 00:18:41.663 --- 10.0.0.3 ping statistics --- 00:18:41.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.663 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:41.663 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:41.663 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:18:41.663 00:18:41.663 --- 10.0.0.4 ping statistics --- 00:18:41.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.663 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:18:41.663 00:18:41.663 --- 10.0.0.1 ping statistics --- 00:18:41.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.663 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:41.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:18:41.663 00:18:41.663 --- 10.0.0.2 ping statistics --- 00:18:41.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.663 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72885 00:18:41.663 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72885 00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72885 ']' 00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
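The nvmf_veth_init sequence traced above can be reproduced outside the test harness; the following is a minimal standalone sketch under the same interface names and 10.0.0.x addressing seen in the log (run as root, simplified to one initiator-side and one target-side veth pair), not the exact helper from test/nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair; the _if end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two pairs together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) on the initiator interface
  ping -c 1 10.0.0.3                                          # initiator -> target sanity check
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator sanity check

Because both bridge-side peers are enslaved to nvmf_br, the initiator in the root namespace can reach 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk, which is exactly what the four ping checks above verify before nvmf_tgt is started in the namespace.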
00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.664 09:29:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:41.923 [2024-12-09 09:29:19.430621] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:41.923 [2024-12-09 09:29:19.430683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.923 [2024-12-09 09:29:19.582386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.923 [2024-12-09 09:29:19.631043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.923 [2024-12-09 09:29:19.631091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.923 [2024-12-09 09:29:19.631102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.923 [2024-12-09 09:29:19.631110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.923 [2024-12-09 09:29:19.631118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.923 [2024-12-09 09:29:19.631382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.182 [2024-12-09 09:29:19.672267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.750 [2024-12-09 09:29:20.399907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.750 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.751 Malloc0 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.751 [2024-12-09 09:29:20.452588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72917 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72918 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72919 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:42.751 09:29:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72917 00:18:43.011 [2024-12-09 09:29:20.642598] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:43.011 [2024-12-09 09:29:20.653093] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:43.011 [2024-12-09 09:29:20.653301] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:43.949 Initializing NVMe Controllers 00:18:43.949 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:43.949 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:43.949 Initialization complete. Launching workers. 00:18:43.949 ======================================================== 00:18:43.949 Latency(us) 00:18:43.949 Device Information : IOPS MiB/s Average min max 00:18:43.949 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4375.00 17.09 228.29 102.77 769.04 00:18:43.949 ======================================================== 00:18:43.949 Total : 4375.00 17.09 228.29 102.77 769.04 00:18:43.949 00:18:44.209 Initializing NVMe Controllers 00:18:44.209 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:44.209 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:44.209 Initialization complete. Launching workers. 00:18:44.209 ======================================================== 00:18:44.209 Latency(us) 00:18:44.209 Device Information : IOPS MiB/s Average min max 00:18:44.209 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4404.98 17.21 226.76 147.57 764.37 00:18:44.209 ======================================================== 00:18:44.209 Total : 4404.98 17.21 226.76 147.57 764.37 00:18:44.209 00:18:44.209 Initializing NVMe Controllers 00:18:44.209 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:44.209 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:44.209 Initialization complete. Launching workers. 
00:18:44.209 ======================================================== 00:18:44.209 Latency(us) 00:18:44.209 Device Information : IOPS MiB/s Average min max 00:18:44.209 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4424.92 17.28 225.75 148.13 776.96 00:18:44.209 ======================================================== 00:18:44.209 Total : 4424.92 17.28 225.75 148.13 776.96 00:18:44.209 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72918 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72919 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.209 rmmod nvme_tcp 00:18:44.209 rmmod nvme_fabrics 00:18:44.209 rmmod nvme_keyring 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72885 ']' 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72885 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72885 ']' 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72885 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72885 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.209 killing process with pid 72885 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72885' 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72885 00:18:44.209 09:29:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72885 00:18:44.468 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.469 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:44.727 00:18:44.727 real 0m3.810s 00:18:44.727 user 0m5.417s 00:18:44.727 
sys 0m1.758s 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:44.727 ************************************ 00:18:44.727 END TEST nvmf_control_msg_list 00:18:44.727 ************************************ 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.727 ************************************ 00:18:44.727 START TEST nvmf_wait_for_buf 00:18:44.727 ************************************ 00:18:44.727 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:44.987 * Looking for test storage... 00:18:44.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.987 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.988 --rc genhtml_branch_coverage=1 00:18:44.988 --rc genhtml_function_coverage=1 00:18:44.988 --rc genhtml_legend=1 00:18:44.988 --rc geninfo_all_blocks=1 00:18:44.988 --rc geninfo_unexecuted_blocks=1 00:18:44.988 00:18:44.988 ' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.988 --rc genhtml_branch_coverage=1 00:18:44.988 --rc genhtml_function_coverage=1 00:18:44.988 --rc genhtml_legend=1 00:18:44.988 --rc geninfo_all_blocks=1 00:18:44.988 --rc geninfo_unexecuted_blocks=1 00:18:44.988 00:18:44.988 ' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.988 --rc genhtml_branch_coverage=1 00:18:44.988 --rc genhtml_function_coverage=1 00:18:44.988 --rc genhtml_legend=1 00:18:44.988 --rc geninfo_all_blocks=1 00:18:44.988 --rc geninfo_unexecuted_blocks=1 00:18:44.988 00:18:44.988 ' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:44.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.988 --rc genhtml_branch_coverage=1 00:18:44.988 --rc genhtml_function_coverage=1 00:18:44.988 --rc genhtml_legend=1 00:18:44.988 --rc geninfo_all_blocks=1 00:18:44.988 --rc geninfo_unexecuted_blocks=1 00:18:44.988 00:18:44.988 ' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.988 09:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.988 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.989 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
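The recurring "[: : integer expression expected" message above is bash complaining that -eq was handed an empty string instead of a number at test/nvmf/common.sh line 33. A minimal reproduction, using a hypothetical flag name since the variable actually tested there is not visible in this excerpt:

  flag=""                     # unset/empty test flag (hypothetical name)
  [ "$flag" -eq 1 ]           # prints "[: : integer expression expected", exits non-zero
  [ "${flag:-0}" -eq 1 ]      # defaulting the value first would keep the comparison quiet

The run is unaffected because the failed test simply takes the false branch and execution continues.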
00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.989 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:45.248 Cannot find device "nvmf_init_br" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:45.248 Cannot find device "nvmf_init_br2" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:45.248 Cannot find device "nvmf_tgt_br" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.248 Cannot find device "nvmf_tgt_br2" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:45.248 Cannot find device "nvmf_init_br" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:45.248 Cannot find device "nvmf_init_br2" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:45.248 Cannot find device "nvmf_tgt_br" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:45.248 Cannot find device "nvmf_tgt_br2" 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:45.248 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:45.248 Cannot find device "nvmf_br" 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:45.249 Cannot find device "nvmf_init_if" 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:45.249 Cannot find device "nvmf_init_if2" 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.249 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.249 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.507 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.507 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.507 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:18:45.508 00:18:45.508 --- 10.0.0.3 ping statistics --- 00:18:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.508 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.508 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.508 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:18:45.508 00:18:45.508 --- 10.0.0.4 ping statistics --- 00:18:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.508 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:18:45.508 00:18:45.508 --- 10.0.0.1 ping statistics --- 00:18:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.508 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:45.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:18:45.508 00:18:45.508 --- 10.0.0.2 ping statistics --- 00:18:45.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.508 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.508 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73160 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73160 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73160 ']' 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.766 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:45.766 [2024-12-09 09:29:23.318337] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
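Note on the block above: the nvmf_veth_init sequence (common.sh@162-219) first tears down any leftovers (hence the harmless "Cannot find device" lines), then builds the test network that the pings just verified: namespace nvmf_tgt_ns_spdk holding the target-side veth ends, initiator addresses 10.0.0.1/10.0.0.2 on the host, target addresses 10.0.0.3/10.0.0.4 inside the namespace, everything joined by the nvmf_br bridge, plus ACCEPT rules for port 4420. A condensed sketch of that plumbing for one of the two interface pairs (names, addresses and port copied from the trace; the *_if2/*_br2 pair follows the same pattern):

  # namespace for the SPDK target plus one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1/24 on the host, target 10.0.0.3/24 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring links up and bridge the two *_br ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # let NVMe/TCP traffic reach port 4420 and allow bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # same sanity check as in the log
  ping -c 1 10.0.0.3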
00:18:45.766 [2024-12-09 09:29:23.318396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.766 [2024-12-09 09:29:23.471014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.024 [2024-12-09 09:29:23.511366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.024 [2024-12-09 09:29:23.511413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.024 [2024-12-09 09:29:23.511423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.024 [2024-12-09 09:29:23.511431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.024 [2024-12-09 09:29:23.511438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.024 [2024-12-09 09:29:23.511719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:46.592 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.592 09:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.592 [2024-12-09 09:29:24.293007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.852 Malloc0 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.852 [2024-12-09 09:29:24.350408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:46.852 [2024-12-09 09:29:24.382451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.852 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:47.110 [2024-12-09 09:29:24.580575] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:48.485 Initializing NVMe Controllers 00:18:48.485 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:48.485 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:48.485 Initialization complete. Launching workers. 00:18:48.485 ======================================================== 00:18:48.485 Latency(us) 00:18:48.485 Device Information : IOPS MiB/s Average min max 00:18:48.485 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.18 62.40 8013.15 7933.63 8132.97 00:18:48.485 ======================================================== 00:18:48.485 Total : 499.18 62.40 8013.15 7933.63 8132.97 00:18:48.485 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:48.485 09:29:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:48.485 rmmod nvme_tcp 00:18:48.485 rmmod nvme_fabrics 00:18:48.485 rmmod nvme_keyring 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73160 ']' 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73160 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73160 ']' 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73160 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73160 00:18:48.485 killing process with pid 73160 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73160' 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73160 00:18:48.485 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73160 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:48.743 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:49.004 00:18:49.004 real 0m4.156s 00:18:49.004 user 0m3.287s 00:18:49.004 sys 0m1.112s 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:49.004 ************************************ 00:18:49.004 END TEST nvmf_wait_for_buf 00:18:49.004 ************************************ 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.004 ************************************ 00:18:49.004 START TEST nvmf_nsid 00:18:49.004 ************************************ 00:18:49.004 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:18:49.264 * Looking for test storage... 
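For reference, the nvmf_wait_for_buf test that just ended is a buffer-starvation check: the iobuf small pool is capped at 154 buffers, the TCP transport gets only 24 shared buffers (-n 24 -b 24), and a short 128 KiB randread perf run is expected to force small-pool allocation retries, which the test reads back via iobuf_get_stats (retry_count=4750 above, required to be non-zero). A hedged re-expression of the traced steps using scripts/rpc.py and spdk_nvme_perf directly (the test itself goes through the rpc_cmd/nvmfappstart wrappers and full build paths, so treat this as a sketch, not the script):

  # the target was started with --wait-for-rpc, so shrink the iobuf pool before framework init
  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc.py framework_start_init

  # small malloc namespace behind a TCP listener with very few data buffers (flags copied from the trace)
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # large reads exhaust the shared buffers, then check that the retry counter is non-zero
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  rpc.py iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'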
00:18:49.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:18:49.264 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:49.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.265 --rc genhtml_branch_coverage=1 00:18:49.265 --rc genhtml_function_coverage=1 00:18:49.265 --rc genhtml_legend=1 00:18:49.265 --rc geninfo_all_blocks=1 00:18:49.265 --rc geninfo_unexecuted_blocks=1 00:18:49.265 00:18:49.265 ' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:49.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.265 --rc genhtml_branch_coverage=1 00:18:49.265 --rc genhtml_function_coverage=1 00:18:49.265 --rc genhtml_legend=1 00:18:49.265 --rc geninfo_all_blocks=1 00:18:49.265 --rc geninfo_unexecuted_blocks=1 00:18:49.265 00:18:49.265 ' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:49.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.265 --rc genhtml_branch_coverage=1 00:18:49.265 --rc genhtml_function_coverage=1 00:18:49.265 --rc genhtml_legend=1 00:18:49.265 --rc geninfo_all_blocks=1 00:18:49.265 --rc geninfo_unexecuted_blocks=1 00:18:49.265 00:18:49.265 ' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:49.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.265 --rc genhtml_branch_coverage=1 00:18:49.265 --rc genhtml_function_coverage=1 00:18:49.265 --rc genhtml_legend=1 00:18:49.265 --rc geninfo_all_blocks=1 00:18:49.265 --rc geninfo_unexecuted_blocks=1 00:18:49.265 00:18:49.265 ' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
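The scripts/common.sh chatter above is the harness deciding whether the installed lcov predates 2.x before exporting LCOV_OPTS; it splits both version strings on dots and compares them field by field. A minimal standalone sketch of that kind of comparison (not the harness function itself):

  lt() {  # succeed if dotted version $1 is strictly lower than $2
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov predates 2.x, keep the branch/function coverage flags"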
00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.265 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:49.266 Cannot find device "nvmf_init_br" 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:49.266 Cannot find device "nvmf_init_br2" 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:49.266 Cannot find device "nvmf_tgt_br" 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:18:49.266 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.525 Cannot find device "nvmf_tgt_br2" 00:18:49.525 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:18:49.525 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:49.525 Cannot find device "nvmf_init_br" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:49.525 Cannot find device "nvmf_init_br2" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:49.525 Cannot find device "nvmf_tgt_br" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:49.525 Cannot find device "nvmf_tgt_br2" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:49.525 Cannot find device "nvmf_br" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:49.525 Cannot find device "nvmf_init_if" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:49.525 Cannot find device "nvmf_init_if2" 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:18:49.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.525 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.784 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
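As in the first test, the "Cannot find device" / "Cannot open network namespace" lines above are expected: nvmf_veth_init starts with a best-effort teardown of anything a previous run may have left behind, and each failed delete is tolerated (the "# true" entries in the trace). A minimal equivalent of that pattern:

  # best-effort cleanup before setup; failures are normal on a clean host
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true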
00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:49.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:18:49.785 00:18:49.785 --- 10.0.0.3 ping statistics --- 00:18:49.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.785 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:49.785 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:49.785 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:18:49.785 00:18:49.785 --- 10.0.0.4 ping statistics --- 00:18:49.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.785 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:18:49.785 00:18:49.785 --- 10.0.0.1 ping statistics --- 00:18:49.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.785 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:49.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:49.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:49.785 00:18:49.785 --- 10.0.0.2 ping statistics --- 00:18:49.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.785 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.785 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73431 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73431 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73431 ']' 00:18:50.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.045 09:29:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:50.045 [2024-12-09 09:29:27.564119] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
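Note that both runs add their firewall rules through the ipts wrapper (common.sh@790): each iptables rule carries an '-m comment --comment SPDK_NVMF:...' tag, and the iptr cleanup at the end of the first test (common.sh@791) removed every tagged rule by round-tripping the ruleset through iptables-save/iptables-restore. Roughly, with the pipe order assumed:

  # insert a rule tagged so it can be swept later (copied from the trace)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # cleanup: drop all SPDK_NVMF-tagged rules in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore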
00:18:50.045 [2024-12-09 09:29:27.564187] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.045 [2024-12-09 09:29:27.717514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.045 [2024-12-09 09:29:27.756657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.045 [2024-12-09 09:29:27.756715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.045 [2024-12-09 09:29:27.756724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.045 [2024-12-09 09:29:27.756732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.045 [2024-12-09 09:29:27.756755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.045 [2024-12-09 09:29:27.757017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.335 [2024-12-09 09:29:27.798414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73463 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:18:50.948 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=10ad1875-d2c5-4911-8822-511ba58c011e 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=da89ce20-2f89-41ac-b2de-ab84e18a2111 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=31520225-d607-4fe0-ae59-fba330fa7001 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:50.949 null0 00:18:50.949 [2024-12-09 09:29:28.542419] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:50.949 [2024-12-09 09:29:28.542649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73463 ] 00:18:50.949 null1 00:18:50.949 null2 00:18:50.949 [2024-12-09 09:29:28.558471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.949 [2024-12-09 09:29:28.582543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73463 /var/tmp/tgt2.sock 00:18:50.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73463 ']' 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
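At this point the nsid test is driving two targets: the nvmf_tgt inside nvmf_tgt_ns_spdk (pid 73431, -m 1, listener 10.0.0.3:4420) and a second spdk_tgt on the host (pid 73463, -m 2) that answers RPCs on /var/tmp/tgt2.sock and, per the listener notice that follows, exposes nqn.2024-10.io.spdk:cnode* on 10.0.0.1:4421 with the three uuidgen values above as namespace identifiers. A sketch of that split; the subsystem and listener RPCs below are written out as assumptions, since the trace collapses them into a single rpc_cmd batch:

  # second SPDK target on its own core mask and its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &

  # drive it by pointing rpc.py at that socket (only NQN, address and port are taken from the log)
  rpc.py -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp
  rpc.py -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  rpc.py -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 \
      -t tcp -a 10.0.0.1 -s 4421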
00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.949 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:51.207 [2024-12-09 09:29:28.692623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.207 [2024-12-09 09:29:28.740344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.207 [2024-12-09 09:29:28.795753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.465 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.465 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:51.465 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:18:51.723 [2024-12-09 09:29:29.292714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.723 [2024-12-09 09:29:29.308785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:18:51.723 nvme0n1 nvme0n2 00:18:51.723 nvme1n1 00:18:51.723 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:18:51.723 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:18:51.723 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:18:51.982 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:52.921 09:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 10ad1875-d2c5-4911-8822-511ba58c011e 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=10ad1875d2c549118822511ba58c011e 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 10AD1875D2C549118822511BA58C011E 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 10AD1875D2C549118822511BA58C011E == \1\0\A\D\1\8\7\5\D\2\C\5\4\9\1\1\8\8\2\2\5\1\1\B\A\5\8\C\0\1\1\E ]] 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:18:52.921 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid da89ce20-2f89-41ac-b2de-ab84e18a2111 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=da89ce202f8941acb2deab84e18a2111 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DA89CE202F8941ACB2DEAB84E18A2111 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DA89CE202F8941ACB2DEAB84E18A2111 == \D\A\8\9\C\E\2\0\2\F\8\9\4\1\A\C\B\2\D\E\A\B\8\4\E\1\8\A\2\1\1\1 ]] 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:18:53.179 09:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 31520225-d607-4fe0-ae59-fba330fa7001 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=31520225d6074fe0ae59fba330fa7001 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 31520225D6074FE0AE59FBA330FA7001 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 31520225D6074FE0AE59FBA330FA7001 == \3\1\5\2\0\2\2\5\D\6\0\7\4\F\E\0\A\E\5\9\F\B\A\3\3\0\F\A\7\0\0\1 ]] 00:18:53.179 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73463 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73463 ']' 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73463 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73463 00:18:53.438 killing process with pid 73463 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73463' 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73463 00:18:53.438 09:29:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73463 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.698 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.698 rmmod nvme_tcp 00:18:53.698 rmmod nvme_fabrics 00:18:53.698 rmmod nvme_keyring 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73431 ']' 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73431 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73431 ']' 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73431 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73431 00:18:53.957 killing process with pid 73431 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73431' 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73431 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73431 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:53.957 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.216 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.475 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:18:54.475 00:18:54.475 real 0m5.313s 00:18:54.475 user 0m6.855s 00:18:54.475 sys 0m2.225s 00:18:54.475 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.475 ************************************ 00:18:54.475 END TEST nvmf_nsid 00:18:54.475 ************************************ 00:18:54.475 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:54.475 09:29:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:54.475 ************************************ 00:18:54.475 END TEST nvmf_target_extra 00:18:54.475 ************************************ 00:18:54.475 00:18:54.475 real 4m41.891s 00:18:54.475 user 9m7.157s 00:18:54.475 sys 1m22.479s 00:18:54.475 09:29:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.475 09:29:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.475 09:29:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:54.475 09:29:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.475 09:29:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.475 09:29:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:54.475 ************************************ 00:18:54.475 START TEST nvmf_host 00:18:54.475 ************************************ 00:18:54.475 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:54.475 * Looking for test storage... 
00:18:54.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.735 --rc genhtml_branch_coverage=1 00:18:54.735 --rc genhtml_function_coverage=1 00:18:54.735 --rc genhtml_legend=1 00:18:54.735 --rc geninfo_all_blocks=1 00:18:54.735 --rc geninfo_unexecuted_blocks=1 00:18:54.735 00:18:54.735 ' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:54.735 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:54.735 --rc genhtml_branch_coverage=1 00:18:54.735 --rc genhtml_function_coverage=1 00:18:54.735 --rc genhtml_legend=1 00:18:54.735 --rc geninfo_all_blocks=1 00:18:54.735 --rc geninfo_unexecuted_blocks=1 00:18:54.735 00:18:54.735 ' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.735 --rc genhtml_branch_coverage=1 00:18:54.735 --rc genhtml_function_coverage=1 00:18:54.735 --rc genhtml_legend=1 00:18:54.735 --rc geninfo_all_blocks=1 00:18:54.735 --rc geninfo_unexecuted_blocks=1 00:18:54.735 00:18:54.735 ' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.735 --rc genhtml_branch_coverage=1 00:18:54.735 --rc genhtml_function_coverage=1 00:18:54.735 --rc genhtml_legend=1 00:18:54.735 --rc geninfo_all_blocks=1 00:18:54.735 --rc geninfo_unexecuted_blocks=1 00:18:54.735 00:18:54.735 ' 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.735 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:54.736 
09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.736 ************************************ 00:18:54.736 START TEST nvmf_identify 00:18:54.736 ************************************ 00:18:54.736 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:54.996 * Looking for test storage... 00:18:54.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.996 --rc genhtml_branch_coverage=1 00:18:54.996 --rc genhtml_function_coverage=1 00:18:54.996 --rc genhtml_legend=1 00:18:54.996 --rc geninfo_all_blocks=1 00:18:54.996 --rc geninfo_unexecuted_blocks=1 00:18:54.996 00:18:54.996 ' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.996 --rc genhtml_branch_coverage=1 00:18:54.996 --rc genhtml_function_coverage=1 00:18:54.996 --rc genhtml_legend=1 00:18:54.996 --rc geninfo_all_blocks=1 00:18:54.996 --rc geninfo_unexecuted_blocks=1 00:18:54.996 00:18:54.996 ' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.996 --rc genhtml_branch_coverage=1 00:18:54.996 --rc genhtml_function_coverage=1 00:18:54.996 --rc genhtml_legend=1 00:18:54.996 --rc geninfo_all_blocks=1 00:18:54.996 --rc geninfo_unexecuted_blocks=1 00:18:54.996 00:18:54.996 ' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.996 --rc genhtml_branch_coverage=1 00:18:54.996 --rc genhtml_function_coverage=1 00:18:54.996 --rc genhtml_legend=1 00:18:54.996 --rc geninfo_all_blocks=1 00:18:54.996 --rc geninfo_unexecuted_blocks=1 00:18:54.996 00:18:54.996 ' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.996 
09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.996 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.997 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.997 09:29:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:54.997 Cannot find device "nvmf_init_br" 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:54.997 Cannot find device "nvmf_init_br2" 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:54.997 Cannot find device "nvmf_tgt_br" 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:54.997 Cannot find device "nvmf_tgt_br2" 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:54.997 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:55.256 Cannot find device "nvmf_init_br" 00:18:55.256 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:55.256 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:55.257 Cannot find device "nvmf_init_br2" 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:55.257 Cannot find device "nvmf_tgt_br" 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:55.257 Cannot find device "nvmf_tgt_br2" 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:55.257 Cannot find device "nvmf_br" 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:55.257 Cannot find device "nvmf_init_if" 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:55.257 Cannot find device "nvmf_init_if2" 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.257 
09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:55.257 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:55.517 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:55.517 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:55.517 09:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:55.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:55.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:18:55.517 00:18:55.517 --- 10.0.0.3 ping statistics --- 00:18:55.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.517 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:55.517 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:55.517 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:18:55.517 00:18:55.517 --- 10.0.0.4 ping statistics --- 00:18:55.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.517 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:55.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:55.517 00:18:55.517 --- 10.0.0.1 ping statistics --- 00:18:55.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.517 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:55.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:18:55.517 00:18:55.517 --- 10.0.0.2 ping statistics --- 00:18:55.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.517 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.517 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73819 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73819 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73819 ']' 00:18:55.518 
09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.518 09:29:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:55.777 [2024-12-09 09:29:33.275578] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:55.777 [2024-12-09 09:29:33.275644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.777 [2024-12-09 09:29:33.431595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.777 [2024-12-09 09:29:33.480109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.777 [2024-12-09 09:29:33.480161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.777 [2024-12-09 09:29:33.480171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.777 [2024-12-09 09:29:33.480180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.777 [2024-12-09 09:29:33.480188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
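The identify test below configures its target entirely over JSON-RPC; the rpc_cmd wrapper calls traced after this point correspond roughly to the following direct rpc.py invocations (a sketch only, taken from the arguments visible in the trace; rpc.py lives in the SPDK repo's scripts/ directory and, absent -s, talks to the default /var/tmp/spdk.sock served by this nvmf_tgt):

    # create the TCP transport, a 64 MiB malloc bdev, and an NVMe-oF subsystem exposing it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # dump the resulting subsystem configuration (the JSON shown further below)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems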
00:18:55.777 [2024-12-09 09:29:33.481079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.777 [2024-12-09 09:29:33.481214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.777 [2024-12-09 09:29:33.481298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.777 [2024-12-09 09:29:33.481302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.036 [2024-12-09 09:29:33.524178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 [2024-12-09 09:29:34.174615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 Malloc0 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 [2024-12-09 09:29:34.302081] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.605 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:56.868 [ 00:18:56.868 { 00:18:56.868 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:56.868 "subtype": "Discovery", 00:18:56.868 "listen_addresses": [ 00:18:56.868 { 00:18:56.868 "trtype": "TCP", 00:18:56.868 "adrfam": "IPv4", 00:18:56.868 "traddr": "10.0.0.3", 00:18:56.868 "trsvcid": "4420" 00:18:56.868 } 00:18:56.868 ], 00:18:56.868 "allow_any_host": true, 00:18:56.868 "hosts": [] 00:18:56.868 }, 00:18:56.868 { 00:18:56.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.868 "subtype": "NVMe", 00:18:56.868 "listen_addresses": [ 00:18:56.868 { 00:18:56.868 "trtype": "TCP", 00:18:56.868 "adrfam": "IPv4", 00:18:56.868 "traddr": "10.0.0.3", 00:18:56.868 "trsvcid": "4420" 00:18:56.868 } 00:18:56.868 ], 00:18:56.868 "allow_any_host": true, 00:18:56.868 "hosts": [], 00:18:56.868 "serial_number": "SPDK00000000000001", 00:18:56.868 "model_number": "SPDK bdev Controller", 00:18:56.868 "max_namespaces": 32, 00:18:56.868 "min_cntlid": 1, 00:18:56.868 "max_cntlid": 65519, 00:18:56.868 "namespaces": [ 00:18:56.868 { 00:18:56.868 "nsid": 1, 00:18:56.868 "bdev_name": "Malloc0", 00:18:56.868 "name": "Malloc0", 00:18:56.868 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:56.868 "eui64": "ABCDEF0123456789", 00:18:56.868 "uuid": "be5c69be-0d57-4afb-800f-b6f2ce541b43" 00:18:56.868 } 00:18:56.868 ] 00:18:56.868 } 00:18:56.868 ] 00:18:56.868 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.868 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:56.868 [2024-12-09 09:29:34.370608] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:18:56.868 [2024-12-09 09:29:34.370691] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73854 ] 00:18:56.868 [2024-12-09 09:29:34.530428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:18:56.868 [2024-12-09 09:29:34.530496] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:56.868 [2024-12-09 09:29:34.530502] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:56.868 [2024-12-09 09:29:34.530517] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:56.868 [2024-12-09 09:29:34.530531] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:56.868 [2024-12-09 09:29:34.530821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:18:56.868 [2024-12-09 09:29:34.530870] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ff6750 0 00:18:56.868 [2024-12-09 09:29:34.537481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:56.868 [2024-12-09 09:29:34.537504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:56.868 [2024-12-09 09:29:34.537509] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:56.868 [2024-12-09 09:29:34.537513] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:56.868 [2024-12-09 09:29:34.537546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.868 [2024-12-09 09:29:34.537552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.868 [2024-12-09 09:29:34.537557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.868 [2024-12-09 09:29:34.537569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:56.868 [2024-12-09 09:29:34.537597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.868 [2024-12-09 09:29:34.545478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.545498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.545503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.545518] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:56.869 [2024-12-09 09:29:34.545526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:18:56.869 [2024-12-09 09:29:34.545533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:18:56.869 [2024-12-09 09:29:34.545557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:56.869 [2024-12-09 09:29:34.545569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.545579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.545604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.545657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.545664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.545669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.545682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:18:56.869 [2024-12-09 09:29:34.545691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:18:56.869 [2024-12-09 09:29:34.545699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.545718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.545734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.545778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.545786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.545791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.545803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:18:56.869 [2024-12-09 09:29:34.545813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:56.869 [2024-12-09 09:29:34.545821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.545839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.545854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.545896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.545903] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.545908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.545920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:56.869 [2024-12-09 09:29:34.545931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.545941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.545949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.545964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.546000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.546007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.546013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.546024] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:56.869 [2024-12-09 09:29:34.546031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:56.869 [2024-12-09 09:29:34.546040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:56.869 [2024-12-09 09:29:34.546147] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:18:56.869 [2024-12-09 09:29:34.546154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:56.869 [2024-12-09 09:29:34.546162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.546178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.546192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.546229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.546236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.546240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:18:56.869 [2024-12-09 09:29:34.546244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.546249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:56.869 [2024-12-09 09:29:34.546258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.546273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.546286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.546329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.869 [2024-12-09 09:29:34.546335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.869 [2024-12-09 09:29:34.546339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.869 [2024-12-09 09:29:34.546348] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:56.869 [2024-12-09 09:29:34.546354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:56.869 [2024-12-09 09:29:34.546362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:18:56.869 [2024-12-09 09:29:34.546372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:56.869 [2024-12-09 09:29:34.546382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.869 [2024-12-09 09:29:34.546393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.869 [2024-12-09 09:29:34.546407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.869 [2024-12-09 09:29:34.546495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:56.869 [2024-12-09 09:29:34.546501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:56.869 [2024-12-09 09:29:34.546506] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546510] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ff6750): datao=0, datal=4096, cccid=0 00:18:56.869 [2024-12-09 09:29:34.546516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205a740) on tqpair(0x1ff6750): expected_datao=0, payload_size=4096 00:18:56.869 [2024-12-09 09:29:34.546521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546528] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546533] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:56.869 [2024-12-09 09:29:34.546541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.870 [2024-12-09 09:29:34.546547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.870 [2024-12-09 09:29:34.546551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.870 [2024-12-09 09:29:34.546564] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:18:56.870 [2024-12-09 09:29:34.546570] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:18:56.870 [2024-12-09 09:29:34.546575] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:18:56.870 [2024-12-09 09:29:34.546581] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:18:56.870 [2024-12-09 09:29:34.546589] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:18:56.870 [2024-12-09 09:29:34.546595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:18:56.870 [2024-12-09 09:29:34.546604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:56.870 [2024-12-09 09:29:34.546611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:56.870 [2024-12-09 09:29:34.546652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.870 [2024-12-09 09:29:34.546698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.870 [2024-12-09 09:29:34.546705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.870 [2024-12-09 09:29:34.546709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.870 [2024-12-09 09:29:34.546724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.870 
[2024-12-09 09:29:34.546745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.870 [2024-12-09 09:29:34.546766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.870 [2024-12-09 09:29:34.546786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.870 [2024-12-09 09:29:34.546806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:56.870 [2024-12-09 09:29:34.546814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:56.870 [2024-12-09 09:29:34.546821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.870 [2024-12-09 09:29:34.546847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a740, cid 0, qid 0 00:18:56.870 [2024-12-09 09:29:34.546853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205a8c0, cid 1, qid 0 00:18:56.870 [2024-12-09 09:29:34.546858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205aa40, cid 2, qid 0 00:18:56.870 [2024-12-09 09:29:34.546863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.870 [2024-12-09 09:29:34.546868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ad40, cid 4, qid 0 00:18:56.870 [2024-12-09 09:29:34.546933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.870 [2024-12-09 09:29:34.546940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.870 [2024-12-09 09:29:34.546944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ad40) on tqpair=0x1ff6750 00:18:56.870 [2024-12-09 
09:29:34.546956] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:18:56.870 [2024-12-09 09:29:34.546962] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:18:56.870 [2024-12-09 09:29:34.546972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.546977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.546983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.870 [2024-12-09 09:29:34.546997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ad40, cid 4, qid 0 00:18:56.870 [2024-12-09 09:29:34.547052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:56.870 [2024-12-09 09:29:34.547058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:56.870 [2024-12-09 09:29:34.547062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ff6750): datao=0, datal=4096, cccid=4 00:18:56.870 [2024-12-09 09:29:34.547071] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205ad40) on tqpair(0x1ff6750): expected_datao=0, payload_size=4096 00:18:56.870 [2024-12-09 09:29:34.547076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547083] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.870 [2024-12-09 09:29:34.547101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.870 [2024-12-09 09:29:34.547105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ad40) on tqpair=0x1ff6750 00:18:56.870 [2024-12-09 09:29:34.547121] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:18:56.870 [2024-12-09 09:29:34.547147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.547158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.870 [2024-12-09 09:29:34.547165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ff6750) 00:18:56.870 [2024-12-09 09:29:34.547179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.870 [2024-12-09 09:29:34.547198] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ad40, cid 4, qid 0 00:18:56.870 [2024-12-09 09:29:34.547204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205aec0, cid 5, qid 0 00:18:56.870 [2024-12-09 09:29:34.547291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:56.870 [2024-12-09 09:29:34.547297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:56.870 [2024-12-09 09:29:34.547301] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ff6750): datao=0, datal=1024, cccid=4 00:18:56.870 [2024-12-09 09:29:34.547311] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205ad40) on tqpair(0x1ff6750): expected_datao=0, payload_size=1024 00:18:56.870 [2024-12-09 09:29:34.547316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547322] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547326] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.870 [2024-12-09 09:29:34.547338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.870 [2024-12-09 09:29:34.547342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205aec0) on tqpair=0x1ff6750 00:18:56.870 [2024-12-09 09:29:34.547361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.870 [2024-12-09 09:29:34.547367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.870 [2024-12-09 09:29:34.547371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.870 [2024-12-09 09:29:34.547375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ad40) on tqpair=0x1ff6750 00:18:56.870 [2024-12-09 09:29:34.547385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ff6750) 00:18:56.871 [2024-12-09 09:29:34.547396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.871 [2024-12-09 09:29:34.547414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ad40, cid 4, qid 0 00:18:56.871 [2024-12-09 09:29:34.547481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:56.871 [2024-12-09 09:29:34.547488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:56.871 [2024-12-09 09:29:34.547492] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547497] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ff6750): datao=0, datal=3072, cccid=4 00:18:56.871 [2024-12-09 09:29:34.547502] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205ad40) on tqpair(0x1ff6750): expected_datao=0, payload_size=3072 00:18:56.871 [2024-12-09 09:29:34.547507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547514] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:18:56.871 [2024-12-09 09:29:34.547518] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.871 [2024-12-09 09:29:34.547532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.871 [2024-12-09 09:29:34.547536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ad40) on tqpair=0x1ff6750 00:18:56.871 [2024-12-09 09:29:34.547549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ff6750) 00:18:56.871 [2024-12-09 09:29:34.547559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.871 [2024-12-09 09:29:34.547578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205ad40, cid 4, qid 0 00:18:56.871 [2024-12-09 09:29:34.547629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:56.871 [2024-12-09 09:29:34.547635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:56.871 [2024-12-09 09:29:34.547639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547643] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ff6750): datao=0, datal=8, cccid=4 00:18:56.871 [2024-12-09 09:29:34.547648] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x205ad40) on tqpair(0x1ff6750): expected_datao=0, payload_size=8 00:18:56.871 [2024-12-09 09:29:34.547653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547659] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547663] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.871 [2024-12-09 09:29:34.547682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.871 [2024-12-09 09:29:34.547686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.871 [2024-12-09 09:29:34.547690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205ad40) on tqpair=0x1ff6750 00:18:56.871 ===================================================== 00:18:56.871 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:56.871 ===================================================== 00:18:56.871 Controller Capabilities/Features 00:18:56.871 ================================ 00:18:56.871 Vendor ID: 0000 00:18:56.871 Subsystem Vendor ID: 0000 00:18:56.871 Serial Number: .................... 00:18:56.871 Model Number: ........................................ 
00:18:56.871 Firmware Version: 25.01 00:18:56.871 Recommended Arb Burst: 0 00:18:56.871 IEEE OUI Identifier: 00 00 00 00:18:56.871 Multi-path I/O 00:18:56.871 May have multiple subsystem ports: No 00:18:56.871 May have multiple controllers: No 00:18:56.871 Associated with SR-IOV VF: No 00:18:56.871 Max Data Transfer Size: 131072 00:18:56.871 Max Number of Namespaces: 0 00:18:56.871 Max Number of I/O Queues: 1024 00:18:56.871 NVMe Specification Version (VS): 1.3 00:18:56.871 NVMe Specification Version (Identify): 1.3 00:18:56.871 Maximum Queue Entries: 128 00:18:56.871 Contiguous Queues Required: Yes 00:18:56.871 Arbitration Mechanisms Supported 00:18:56.871 Weighted Round Robin: Not Supported 00:18:56.871 Vendor Specific: Not Supported 00:18:56.871 Reset Timeout: 15000 ms 00:18:56.871 Doorbell Stride: 4 bytes 00:18:56.871 NVM Subsystem Reset: Not Supported 00:18:56.871 Command Sets Supported 00:18:56.871 NVM Command Set: Supported 00:18:56.871 Boot Partition: Not Supported 00:18:56.871 Memory Page Size Minimum: 4096 bytes 00:18:56.871 Memory Page Size Maximum: 4096 bytes 00:18:56.871 Persistent Memory Region: Not Supported 00:18:56.871 Optional Asynchronous Events Supported 00:18:56.871 Namespace Attribute Notices: Not Supported 00:18:56.871 Firmware Activation Notices: Not Supported 00:18:56.871 ANA Change Notices: Not Supported 00:18:56.871 PLE Aggregate Log Change Notices: Not Supported 00:18:56.871 LBA Status Info Alert Notices: Not Supported 00:18:56.871 EGE Aggregate Log Change Notices: Not Supported 00:18:56.871 Normal NVM Subsystem Shutdown event: Not Supported 00:18:56.871 Zone Descriptor Change Notices: Not Supported 00:18:56.871 Discovery Log Change Notices: Supported 00:18:56.871 Controller Attributes 00:18:56.871 128-bit Host Identifier: Not Supported 00:18:56.871 Non-Operational Permissive Mode: Not Supported 00:18:56.871 NVM Sets: Not Supported 00:18:56.871 Read Recovery Levels: Not Supported 00:18:56.871 Endurance Groups: Not Supported 00:18:56.871 Predictable Latency Mode: Not Supported 00:18:56.871 Traffic Based Keep ALive: Not Supported 00:18:56.871 Namespace Granularity: Not Supported 00:18:56.871 SQ Associations: Not Supported 00:18:56.871 UUID List: Not Supported 00:18:56.871 Multi-Domain Subsystem: Not Supported 00:18:56.871 Fixed Capacity Management: Not Supported 00:18:56.871 Variable Capacity Management: Not Supported 00:18:56.871 Delete Endurance Group: Not Supported 00:18:56.871 Delete NVM Set: Not Supported 00:18:56.871 Extended LBA Formats Supported: Not Supported 00:18:56.871 Flexible Data Placement Supported: Not Supported 00:18:56.871 00:18:56.871 Controller Memory Buffer Support 00:18:56.871 ================================ 00:18:56.871 Supported: No 00:18:56.871 00:18:56.871 Persistent Memory Region Support 00:18:56.871 ================================ 00:18:56.871 Supported: No 00:18:56.871 00:18:56.871 Admin Command Set Attributes 00:18:56.871 ============================ 00:18:56.871 Security Send/Receive: Not Supported 00:18:56.871 Format NVM: Not Supported 00:18:56.871 Firmware Activate/Download: Not Supported 00:18:56.871 Namespace Management: Not Supported 00:18:56.871 Device Self-Test: Not Supported 00:18:56.871 Directives: Not Supported 00:18:56.871 NVMe-MI: Not Supported 00:18:56.871 Virtualization Management: Not Supported 00:18:56.871 Doorbell Buffer Config: Not Supported 00:18:56.871 Get LBA Status Capability: Not Supported 00:18:56.871 Command & Feature Lockdown Capability: Not Supported 00:18:56.871 Abort Command Limit: 1 00:18:56.871 Async 
Event Request Limit: 4 00:18:56.871 Number of Firmware Slots: N/A 00:18:56.871 Firmware Slot 1 Read-Only: N/A 00:18:56.871 Firmware Activation Without Reset: N/A 00:18:56.871 Multiple Update Detection Support: N/A 00:18:56.871 Firmware Update Granularity: No Information Provided 00:18:56.871 Per-Namespace SMART Log: No 00:18:56.871 Asymmetric Namespace Access Log Page: Not Supported 00:18:56.871 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:56.871 Command Effects Log Page: Not Supported 00:18:56.871 Get Log Page Extended Data: Supported 00:18:56.871 Telemetry Log Pages: Not Supported 00:18:56.871 Persistent Event Log Pages: Not Supported 00:18:56.871 Supported Log Pages Log Page: May Support 00:18:56.871 Commands Supported & Effects Log Page: Not Supported 00:18:56.871 Feature Identifiers & Effects Log Page:May Support 00:18:56.871 NVMe-MI Commands & Effects Log Page: May Support 00:18:56.871 Data Area 4 for Telemetry Log: Not Supported 00:18:56.871 Error Log Page Entries Supported: 128 00:18:56.871 Keep Alive: Not Supported 00:18:56.871 00:18:56.871 NVM Command Set Attributes 00:18:56.871 ========================== 00:18:56.871 Submission Queue Entry Size 00:18:56.871 Max: 1 00:18:56.871 Min: 1 00:18:56.871 Completion Queue Entry Size 00:18:56.871 Max: 1 00:18:56.872 Min: 1 00:18:56.872 Number of Namespaces: 0 00:18:56.872 Compare Command: Not Supported 00:18:56.872 Write Uncorrectable Command: Not Supported 00:18:56.872 Dataset Management Command: Not Supported 00:18:56.872 Write Zeroes Command: Not Supported 00:18:56.872 Set Features Save Field: Not Supported 00:18:56.872 Reservations: Not Supported 00:18:56.872 Timestamp: Not Supported 00:18:56.872 Copy: Not Supported 00:18:56.872 Volatile Write Cache: Not Present 00:18:56.872 Atomic Write Unit (Normal): 1 00:18:56.872 Atomic Write Unit (PFail): 1 00:18:56.872 Atomic Compare & Write Unit: 1 00:18:56.872 Fused Compare & Write: Supported 00:18:56.872 Scatter-Gather List 00:18:56.872 SGL Command Set: Supported 00:18:56.872 SGL Keyed: Supported 00:18:56.872 SGL Bit Bucket Descriptor: Not Supported 00:18:56.872 SGL Metadata Pointer: Not Supported 00:18:56.872 Oversized SGL: Not Supported 00:18:56.872 SGL Metadata Address: Not Supported 00:18:56.872 SGL Offset: Supported 00:18:56.872 Transport SGL Data Block: Not Supported 00:18:56.872 Replay Protected Memory Block: Not Supported 00:18:56.872 00:18:56.872 Firmware Slot Information 00:18:56.872 ========================= 00:18:56.872 Active slot: 0 00:18:56.872 00:18:56.872 00:18:56.872 Error Log 00:18:56.872 ========= 00:18:56.872 00:18:56.872 Active Namespaces 00:18:56.872 ================= 00:18:56.872 Discovery Log Page 00:18:56.872 ================== 00:18:56.872 Generation Counter: 2 00:18:56.872 Number of Records: 2 00:18:56.872 Record Format: 0 00:18:56.872 00:18:56.872 Discovery Log Entry 0 00:18:56.872 ---------------------- 00:18:56.872 Transport Type: 3 (TCP) 00:18:56.872 Address Family: 1 (IPv4) 00:18:56.872 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:56.872 Entry Flags: 00:18:56.872 Duplicate Returned Information: 1 00:18:56.872 Explicit Persistent Connection Support for Discovery: 1 00:18:56.872 Transport Requirements: 00:18:56.872 Secure Channel: Not Required 00:18:56.872 Port ID: 0 (0x0000) 00:18:56.872 Controller ID: 65535 (0xffff) 00:18:56.872 Admin Max SQ Size: 128 00:18:56.872 Transport Service Identifier: 4420 00:18:56.872 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:56.872 Transport Address: 10.0.0.3 00:18:56.872 
Discovery Log Entry 1 00:18:56.872 ---------------------- 00:18:56.872 Transport Type: 3 (TCP) 00:18:56.872 Address Family: 1 (IPv4) 00:18:56.872 Subsystem Type: 2 (NVM Subsystem) 00:18:56.872 Entry Flags: 00:18:56.872 Duplicate Returned Information: 0 00:18:56.872 Explicit Persistent Connection Support for Discovery: 0 00:18:56.872 Transport Requirements: 00:18:56.872 Secure Channel: Not Required 00:18:56.872 Port ID: 0 (0x0000) 00:18:56.872 Controller ID: 65535 (0xffff) 00:18:56.872 Admin Max SQ Size: 128 00:18:56.872 Transport Service Identifier: 4420 00:18:56.872 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:56.872 Transport Address: 10.0.0.3 [2024-12-09 09:29:34.547795] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:18:56.872 [2024-12-09 09:29:34.547808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a740) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.547815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.872 [2024-12-09 09:29:34.547821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205a8c0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.547826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.872 [2024-12-09 09:29:34.547832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205aa40) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.547837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.872 [2024-12-09 09:29:34.547842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.547847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.872 [2024-12-09 09:29:34.547856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.547860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.547864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.872 [2024-12-09 09:29:34.547871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.872 [2024-12-09 09:29:34.547890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.872 [2024-12-09 09:29:34.547935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.872 [2024-12-09 09:29:34.547942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.872 [2024-12-09 09:29:34.547946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.547950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.547960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.547965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.547969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.872 [2024-12-09 
09:29:34.547975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.872 [2024-12-09 09:29:34.547992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.872 [2024-12-09 09:29:34.548046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.872 [2024-12-09 09:29:34.548052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.872 [2024-12-09 09:29:34.548056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.548065] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:18:56.872 [2024-12-09 09:29:34.548071] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:18:56.872 [2024-12-09 09:29:34.548080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.872 [2024-12-09 09:29:34.548095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.872 [2024-12-09 09:29:34.548108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.872 [2024-12-09 09:29:34.548146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.872 [2024-12-09 09:29:34.548153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.872 [2024-12-09 09:29:34.548156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.548171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.872 [2024-12-09 09:29:34.548194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.872 [2024-12-09 09:29:34.548208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.872 [2024-12-09 09:29:34.548249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.872 [2024-12-09 09:29:34.548255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.872 [2024-12-09 09:29:34.548259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.548273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548281] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.872 [2024-12-09 09:29:34.548287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.872 [2024-12-09 09:29:34.548301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.872 [2024-12-09 09:29:34.548342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.872 [2024-12-09 09:29:34.548348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.872 [2024-12-09 09:29:34.548352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.872 [2024-12-09 09:29:34.548365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.872 [2024-12-09 09:29:34.548374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.872 [2024-12-09 09:29:34.548380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.872 [2024-12-09 09:29:34.548393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.872 [2024-12-09 09:29:34.548426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.872 [2024-12-09 09:29:34.548432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.872 [2024-12-09 09:29:34.548436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.548450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.548477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.548491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.548528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.548534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.548538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.548551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.548566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.548580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.548618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.548624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.548628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.548641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.548656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.548670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.548702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.548709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.548713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.548726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.548741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.548755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.548792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.548799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.548803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.548816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.548831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.548844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 
[2024-12-09 09:29:34.548888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.548894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.548898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.548911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.548926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.548940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.548981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.548987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.548991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.548995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.549004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.549009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.549013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.549019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.549033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.549070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.549077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.873 [2024-12-09 09:29:34.549081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.549085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.549094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.549098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.549102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.873 [2024-12-09 09:29:34.549109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.873 [2024-12-09 09:29:34.549123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.873 [2024-12-09 09:29:34.549162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.873 [2024-12-09 09:29:34.549171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:18:56.873 [2024-12-09 09:29:34.549177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.873 [2024-12-09 09:29:34.549181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.873 [2024-12-09 09:29:34.549190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.874 [2024-12-09 09:29:34.549205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.874 [2024-12-09 09:29:34.549219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.874 [2024-12-09 09:29:34.549257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.874 [2024-12-09 09:29:34.549264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.874 [2024-12-09 09:29:34.549268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.874 [2024-12-09 09:29:34.549281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.874 [2024-12-09 09:29:34.549296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.874 [2024-12-09 09:29:34.549309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.874 [2024-12-09 09:29:34.549347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.874 [2024-12-09 09:29:34.549353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.874 [2024-12-09 09:29:34.549357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.874 [2024-12-09 09:29:34.549371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.874 [2024-12-09 09:29:34.549386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.874 [2024-12-09 09:29:34.549399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.874 [2024-12-09 09:29:34.549437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.874 [2024-12-09 09:29:34.549443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.874 [2024-12-09 09:29:34.549447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.549452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.874 [2024-12-09 09:29:34.553476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.553496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.553501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ff6750) 00:18:56.874 [2024-12-09 09:29:34.553509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.874 [2024-12-09 09:29:34.553532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x205abc0, cid 3, qid 0 00:18:56.874 [2024-12-09 09:29:34.553574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:56.874 [2024-12-09 09:29:34.553581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:56.874 [2024-12-09 09:29:34.553585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:56.874 [2024-12-09 09:29:34.553590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x205abc0) on tqpair=0x1ff6750 00:18:56.874 [2024-12-09 09:29:34.553598] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:18:56.874 00:18:56.874 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:57.136 [2024-12-09 09:29:34.596258] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:57.136 [2024-12-09 09:29:34.596318] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73862 ] 00:18:57.136 [2024-12-09 09:29:34.756399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:18:57.136 [2024-12-09 09:29:34.756455] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:57.136 [2024-12-09 09:29:34.756470] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:57.136 [2024-12-09 09:29:34.756484] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:57.136 [2024-12-09 09:29:34.756497] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:57.136 [2024-12-09 09:29:34.756775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:18:57.136 [2024-12-09 09:29:34.756811] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ed9750 0 00:18:57.136 [2024-12-09 09:29:34.763477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:57.136 [2024-12-09 09:29:34.763498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:57.136 [2024-12-09 09:29:34.763504] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:57.136 [2024-12-09 09:29:34.763507] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:57.136 [2024-12-09 09:29:34.763537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
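The burst of nvme_tcp/nvme_ctrlr *DEBUG* lines that follows comes from the -L all flag on the spdk_nvme_identify invocation shown above. A minimal manual reproduction of that step, assuming the same SPDK build tree inside the test VM and a target still listening on 10.0.0.3:4420, would look like this (paths and addresses are taken from the log, not verified independently):

    # Hypothetical by-hand rerun of the identify step traced below.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    "$SPDK_BIN/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all    # -L all enables the per-module debug log flags that produce the tracing seen here

Without -L all only the controller report and the *NOTICE* admin-queue command lines would appear in the output.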
00:18:57.136 [2024-12-09 09:29:34.763543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.136 [2024-12-09 09:29:34.763547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.136 [2024-12-09 09:29:34.763558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:57.136 [2024-12-09 09:29:34.763583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.136 [2024-12-09 09:29:34.771478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.771496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.771501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.771517] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:57.137 [2024-12-09 09:29:34.771524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:18:57.137 [2024-12-09 09:29:34.771530] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:18:57.137 [2024-12-09 09:29:34.771546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.137 [2024-12-09 09:29:34.771562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.137 [2024-12-09 09:29:34.771583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.137 [2024-12-09 09:29:34.771621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.771627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.771631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.771642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:18:57.137 [2024-12-09 09:29:34.771649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:18:57.137 [2024-12-09 09:29:34.771656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.137 [2024-12-09 09:29:34.771670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.137 [2024-12-09 09:29:34.771683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 
0, qid 0 00:18:57.137 [2024-12-09 09:29:34.771716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.771721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.771725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.771734] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:18:57.137 [2024-12-09 09:29:34.771742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:57.137 [2024-12-09 09:29:34.771749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.137 [2024-12-09 09:29:34.771762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.137 [2024-12-09 09:29:34.771775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.137 [2024-12-09 09:29:34.771813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.771818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.771822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.771831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:57.137 [2024-12-09 09:29:34.771840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.137 [2024-12-09 09:29:34.771854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.137 [2024-12-09 09:29:34.771866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.137 [2024-12-09 09:29:34.771901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.771907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.771911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.771915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.771919] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:57.137 [2024-12-09 09:29:34.771925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
controller is disabled (timeout 15000 ms) 00:18:57.137 [2024-12-09 09:29:34.771933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:57.137 [2024-12-09 09:29:34.772039] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:18:57.137 [2024-12-09 09:29:34.772045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:57.137 [2024-12-09 09:29:34.772053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.772057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.772061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.137 [2024-12-09 09:29:34.772067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.137 [2024-12-09 09:29:34.772080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.137 [2024-12-09 09:29:34.772115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.772121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.772124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.772128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.772133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:57.137 [2024-12-09 09:29:34.772141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.772146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.772149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.137 [2024-12-09 09:29:34.772155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.137 [2024-12-09 09:29:34.772168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.137 [2024-12-09 09:29:34.772205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.137 [2024-12-09 09:29:34.772211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.137 [2024-12-09 09:29:34.772215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.137 [2024-12-09 09:29:34.772219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.137 [2024-12-09 09:29:34.772223] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:57.137 [2024-12-09 09:29:34.772228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:57.137 [2024-12-09 09:29:34.772236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 
00:18:57.138 [2024-12-09 09:29:34.772245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.138 [2024-12-09 09:29:34.772277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.138 [2024-12-09 09:29:34.772355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.138 [2024-12-09 09:29:34.772361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.138 [2024-12-09 09:29:34.772365] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772369] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=4096, cccid=0 00:18:57.138 [2024-12-09 09:29:34.772374] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3d740) on tqpair(0x1ed9750): expected_datao=0, payload_size=4096 00:18:57.138 [2024-12-09 09:29:34.772379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772386] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.138 [2024-12-09 09:29:34.772404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.138 [2024-12-09 09:29:34.772407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.138 [2024-12-09 09:29:34.772419] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:18:57.138 [2024-12-09 09:29:34.772424] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:18:57.138 [2024-12-09 09:29:34.772429] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:18:57.138 [2024-12-09 09:29:34.772434] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:18:57.138 [2024-12-09 09:29:34.772442] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:18:57.138 [2024-12-09 09:29:34.772447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 
09:29:34.772481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:57.138 [2024-12-09 09:29:34.772502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.138 [2024-12-09 09:29:34.772538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.138 [2024-12-09 09:29:34.772544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.138 [2024-12-09 09:29:34.772548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.138 [2024-12-09 09:29:34.772562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.138 [2024-12-09 09:29:34.772581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.138 [2024-12-09 09:29:34.772600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.138 [2024-12-09 09:29:34.772619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.138 [2024-12-09 09:29:34.772637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772655] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.138 [2024-12-09 09:29:34.772676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d740, cid 0, qid 0 00:18:57.138 [2024-12-09 09:29:34.772681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3d8c0, cid 1, qid 0 00:18:57.138 [2024-12-09 09:29:34.772686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3da40, cid 2, qid 0 00:18:57.138 [2024-12-09 09:29:34.772690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.138 [2024-12-09 09:29:34.772695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.138 [2024-12-09 09:29:34.772760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.138 [2024-12-09 09:29:34.772765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.138 [2024-12-09 09:29:34.772769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.138 [2024-12-09 09:29:34.772781] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:18:57.138 [2024-12-09 09:29:34.772787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:57.138 [2024-12-09 09:29:34.772807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.138 [2024-12-09 09:29:34.772815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.138 [2024-12-09 09:29:34.772821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:57.138 [2024-12-09 09:29:34.772834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.138 [2024-12-09 09:29:34.772882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.139 [2024-12-09 09:29:34.772888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.139 [2024-12-09 09:29:34.772892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.772896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.139 [2024-12-09 09:29:34.772947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.772956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.772963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.772967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.139 [2024-12-09 09:29:34.772973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.139 [2024-12-09 09:29:34.772986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.139 [2024-12-09 09:29:34.773035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.139 [2024-12-09 09:29:34.773041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.139 [2024-12-09 09:29:34.773045] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773049] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=4096, cccid=4 00:18:57.139 [2024-12-09 09:29:34.773054] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3dd40) on tqpair(0x1ed9750): expected_datao=0, payload_size=4096 00:18:57.139 [2024-12-09 09:29:34.773058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773065] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.139 [2024-12-09 09:29:34.773082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.139 [2024-12-09 09:29:34.773086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.139 [2024-12-09 09:29:34.773104] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:18:57.139 [2024-12-09 09:29:34.773114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.139 [2024-12-09 09:29:34.773139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.139 [2024-12-09 09:29:34.773153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.139 [2024-12-09 09:29:34.773230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.139 [2024-12-09 09:29:34.773236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.139 [2024-12-09 09:29:34.773240] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773244] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=4096, cccid=4 00:18:57.139 [2024-12-09 09:29:34.773248] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3dd40) on tqpair(0x1ed9750): expected_datao=0, payload_size=4096 00:18:57.139 [2024-12-09 09:29:34.773253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.139 [2024-12-09 09:29:34.773276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.139 [2024-12-09 09:29:34.773280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.139 [2024-12-09 09:29:34.773296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.139 [2024-12-09 09:29:34.773321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.139 [2024-12-09 09:29:34.773334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.139 [2024-12-09 09:29:34.773375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.139 [2024-12-09 09:29:34.773381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.139 [2024-12-09 09:29:34.773385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773389] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=4096, cccid=4 00:18:57.139 [2024-12-09 09:29:34.773394] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3dd40) on tqpair(0x1ed9750): expected_datao=0, payload_size=4096 00:18:57.139 [2024-12-09 09:29:34.773398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773404] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773408] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.139 [2024-12-09 09:29:34.773421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.139 [2024-12-09 09:29:34.773425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.139 [2024-12-09 09:29:34.773436] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773486] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:18:57.139 [2024-12-09 09:29:34.773492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:18:57.139 [2024-12-09 09:29:34.773497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:18:57.139 [2024-12-09 09:29:34.773511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.139 [2024-12-09 09:29:34.773521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.139 [2024-12-09 09:29:34.773528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.139 [2024-12-09 09:29:34.773535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.140 [2024-12-09 09:29:34.773559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.140 [2024-12-09 09:29:34.773565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dec0, cid 5, qid 0 00:18:57.140 [2024-12-09 09:29:34.773611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.140 [2024-12-09 09:29:34.773617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.140 [2024-12-09 09:29:34.773621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.140 [2024-12-09 09:29:34.773631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.140 [2024-12-09 09:29:34.773637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.140 [2024-12-09 09:29:34.773640] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dec0) on tqpair=0x1ed9750 00:18:57.140 [2024-12-09 09:29:34.773654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dec0, cid 5, qid 0 00:18:57.140 [2024-12-09 09:29:34.773714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.140 [2024-12-09 09:29:34.773720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.140 [2024-12-09 09:29:34.773723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dec0) on tqpair=0x1ed9750 00:18:57.140 [2024-12-09 09:29:34.773736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dec0, cid 5, qid 0 00:18:57.140 [2024-12-09 09:29:34.773796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.140 [2024-12-09 09:29:34.773802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.140 [2024-12-09 09:29:34.773806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dec0) on tqpair=0x1ed9750 00:18:57.140 [2024-12-09 09:29:34.773819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dec0, cid 5, qid 0 00:18:57.140 [2024-12-09 09:29:34.773875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.140 [2024-12-09 09:29:34.773880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.140 [2024-12-09 09:29:34.773884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dec0) on tqpair=0x1ed9750 00:18:57.140 [2024-12-09 09:29:34.773903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ed9750) 
00:18:57.140 [2024-12-09 09:29:34.773913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.773961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ed9750) 00:18:57.140 [2024-12-09 09:29:34.773967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.140 [2024-12-09 09:29:34.773980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dec0, cid 5, qid 0 00:18:57.140 [2024-12-09 09:29:34.773986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dd40, cid 4, qid 0 00:18:57.140 [2024-12-09 09:29:34.773990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3e040, cid 6, qid 0 00:18:57.140 [2024-12-09 09:29:34.773995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3e1c0, cid 7, qid 0 00:18:57.140 [2024-12-09 09:29:34.774123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.140 [2024-12-09 09:29:34.774130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.140 [2024-12-09 09:29:34.774134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774138] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=8192, cccid=5 00:18:57.140 [2024-12-09 09:29:34.774143] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3dec0) on tqpair(0x1ed9750): expected_datao=0, payload_size=8192 00:18:57.140 [2024-12-09 09:29:34.774148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774163] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774167] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.140 [2024-12-09 09:29:34.774179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.140 [2024-12-09 09:29:34.774183] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, 
datal=512, cccid=4 00:18:57.140 [2024-12-09 09:29:34.774192] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3dd40) on tqpair(0x1ed9750): expected_datao=0, payload_size=512 00:18:57.140 [2024-12-09 09:29:34.774197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774203] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.140 [2024-12-09 09:29:34.774219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.140 [2024-12-09 09:29:34.774223] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.140 [2024-12-09 09:29:34.774227] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=512, cccid=6 00:18:57.140 [2024-12-09 09:29:34.774232] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3e040) on tqpair(0x1ed9750): expected_datao=0, payload_size=512 00:18:57.140 [2024-12-09 09:29:34.774236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.140 ===================================================== 00:18:57.140 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:57.140 ===================================================== 00:18:57.140 Controller Capabilities/Features 00:18:57.140 ================================ 00:18:57.140 Vendor ID: 8086 00:18:57.140 Subsystem Vendor ID: 8086 00:18:57.140 Serial Number: SPDK00000000000001 00:18:57.140 Model Number: SPDK bdev Controller 00:18:57.141 Firmware Version: 25.01 00:18:57.141 Recommended Arb Burst: 6 00:18:57.141 IEEE OUI Identifier: e4 d2 5c 00:18:57.141 Multi-path I/O 00:18:57.141 May have multiple subsystem ports: Yes 00:18:57.141 May have multiple controllers: Yes 00:18:57.141 Associated with SR-IOV VF: No 00:18:57.141 Max Data Transfer Size: 131072 00:18:57.141 Max Number of Namespaces: 32 00:18:57.141 Max Number of I/O Queues: 127 00:18:57.141 NVMe Specification Version (VS): 1.3 00:18:57.141 NVMe Specification Version (Identify): 1.3 00:18:57.141 Maximum Queue Entries: 128 00:18:57.141 Contiguous Queues Required: Yes 00:18:57.141 Arbitration Mechanisms Supported 00:18:57.141 Weighted Round Robin: Not Supported 00:18:57.141 Vendor Specific: Not Supported 00:18:57.141 Reset Timeout: 15000 ms 00:18:57.141 Doorbell Stride: 4 bytes 00:18:57.141 NVM Subsystem Reset: Not Supported 00:18:57.141 Command Sets Supported 00:18:57.141 NVM Command Set: Supported 00:18:57.141 Boot Partition: Not Supported 00:18:57.141 Memory Page Size Minimum: 4096 bytes 00:18:57.141 Memory Page Size Maximum: 4096 bytes 00:18:57.141 Persistent Memory Region: Not Supported 00:18:57.141 Optional Asynchronous Events Supported 00:18:57.141 Namespace Attribute Notices: Supported 00:18:57.141 Firmware Activation Notices: Not Supported 00:18:57.141 ANA Change Notices: Not Supported 00:18:57.141 PLE Aggregate Log Change Notices: Not Supported 00:18:57.141 LBA Status Info Alert Notices: Not Supported 00:18:57.141 EGE Aggregate Log Change Notices: Not Supported 00:18:57.141 Normal NVM Subsystem Shutdown event: Not Supported 00:18:57.141 Zone Descriptor Change Notices: Not Supported 00:18:57.141 Discovery Log Change Notices: Not Supported 00:18:57.141 Controller Attributes 00:18:57.141 128-bit Host Identifier: Supported 00:18:57.141 
Non-Operational Permissive Mode: Not Supported 00:18:57.141 NVM Sets: Not Supported 00:18:57.141 Read Recovery Levels: Not Supported 00:18:57.141 Endurance Groups: Not Supported 00:18:57.141 Predictable Latency Mode: Not Supported 00:18:57.141 Traffic Based Keep ALive: Not Supported 00:18:57.141 Namespace Granularity: Not Supported 00:18:57.141 SQ Associations: Not Supported 00:18:57.141 UUID List: Not Supported 00:18:57.141 Multi-Domain Subsystem: Not Supported 00:18:57.141 Fixed Capacity Management: Not Supported 00:18:57.141 Variable Capacity Management: Not Supported 00:18:57.141 Delete Endurance Group: Not Supported 00:18:57.141 Delete NVM Set: Not Supported 00:18:57.141 Extended LBA Formats Supported: Not Supported 00:18:57.141 Flexible Data Placement Supported: Not Supported 00:18:57.141 00:18:57.141 Controller Memory Buffer Support 00:18:57.141 ================================ 00:18:57.141 Supported: No 00:18:57.141 00:18:57.141 Persistent Memory Region Support 00:18:57.141 ================================ 00:18:57.141 Supported: No 00:18:57.141 00:18:57.141 Admin Command Set Attributes 00:18:57.141 ============================ 00:18:57.141 Security Send/Receive: Not Supported 00:18:57.141 Format NVM: Not Supported 00:18:57.141 Firmware Activate/Download: Not Supported 00:18:57.141 Namespace Management: Not Supported 00:18:57.141 Device Self-Test: Not Supported 00:18:57.141 Directives: Not Supported 00:18:57.141 NVMe-MI: Not Supported 00:18:57.141 Virtualization Management: Not Supported 00:18:57.141 Doorbell Buffer Config: Not Supported 00:18:57.141 Get LBA Status Capability: Not Supported 00:18:57.141 Command & Feature Lockdown Capability: Not Supported 00:18:57.141 Abort Command Limit: 4 00:18:57.141 Async Event Request Limit: 4 00:18:57.141 Number of Firmware Slots: N/A 00:18:57.141 Firmware Slot 1 Read-Only: N/A 00:18:57.141 Firmware Activation Without Reset: [2024-12-09 09:29:34.774243] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774247] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:57.141 [2024-12-09 09:29:34.774258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:57.141 [2024-12-09 09:29:34.774262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ed9750): datao=0, datal=4096, cccid=7 00:18:57.141 [2024-12-09 09:29:34.774271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f3e1c0) on tqpair(0x1ed9750): expected_datao=0, payload_size=4096 00:18:57.141 [2024-12-09 09:29:34.774276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774287] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.141 [2024-12-09 09:29:34.774298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.141 [2024-12-09 09:29:34.774302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dec0) on tqpair=0x1ed9750 00:18:57.141 [2024-12-09 
09:29:34.774321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.141 [2024-12-09 09:29:34.774327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.141 [2024-12-09 09:29:34.774331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dd40) on tqpair=0x1ed9750 00:18:57.141 [2024-12-09 09:29:34.774348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.141 [2024-12-09 09:29:34.774354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.141 [2024-12-09 09:29:34.774357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3e040) on tqpair=0x1ed9750 00:18:57.141 [2024-12-09 09:29:34.774369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.141 [2024-12-09 09:29:34.774375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.141 [2024-12-09 09:29:34.774379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.141 [2024-12-09 09:29:34.774383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3e1c0) on tqpair=0x1ed9750 00:18:57.141 N/A 00:18:57.141 Multiple Update Detection Support: N/A 00:18:57.141 Firmware Update Granularity: No Information Provided 00:18:57.141 Per-Namespace SMART Log: No 00:18:57.141 Asymmetric Namespace Access Log Page: Not Supported 00:18:57.141 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:57.141 Command Effects Log Page: Supported 00:18:57.141 Get Log Page Extended Data: Supported 00:18:57.141 Telemetry Log Pages: Not Supported 00:18:57.141 Persistent Event Log Pages: Not Supported 00:18:57.141 Supported Log Pages Log Page: May Support 00:18:57.141 Commands Supported & Effects Log Page: Not Supported 00:18:57.141 Feature Identifiers & Effects Log Page:May Support 00:18:57.141 NVMe-MI Commands & Effects Log Page: May Support 00:18:57.142 Data Area 4 for Telemetry Log: Not Supported 00:18:57.142 Error Log Page Entries Supported: 128 00:18:57.142 Keep Alive: Supported 00:18:57.142 Keep Alive Granularity: 10000 ms 00:18:57.142 00:18:57.142 NVM Command Set Attributes 00:18:57.142 ========================== 00:18:57.142 Submission Queue Entry Size 00:18:57.142 Max: 64 00:18:57.142 Min: 64 00:18:57.142 Completion Queue Entry Size 00:18:57.142 Max: 16 00:18:57.142 Min: 16 00:18:57.142 Number of Namespaces: 32 00:18:57.142 Compare Command: Supported 00:18:57.142 Write Uncorrectable Command: Not Supported 00:18:57.142 Dataset Management Command: Supported 00:18:57.142 Write Zeroes Command: Supported 00:18:57.142 Set Features Save Field: Not Supported 00:18:57.142 Reservations: Supported 00:18:57.142 Timestamp: Not Supported 00:18:57.142 Copy: Supported 00:18:57.142 Volatile Write Cache: Present 00:18:57.142 Atomic Write Unit (Normal): 1 00:18:57.142 Atomic Write Unit (PFail): 1 00:18:57.142 Atomic Compare & Write Unit: 1 00:18:57.142 Fused Compare & Write: Supported 00:18:57.142 Scatter-Gather List 00:18:57.142 SGL Command Set: Supported 00:18:57.142 SGL Keyed: Supported 00:18:57.142 SGL Bit Bucket Descriptor: Not Supported 00:18:57.142 SGL Metadata Pointer: Not Supported 00:18:57.142 Oversized SGL: Not Supported 00:18:57.142 SGL Metadata Address: Not Supported 00:18:57.142 SGL Offset: Supported 00:18:57.142 Transport SGL Data Block: Not Supported 
00:18:57.142 Replay Protected Memory Block: Not Supported 00:18:57.142 00:18:57.142 Firmware Slot Information 00:18:57.142 ========================= 00:18:57.142 Active slot: 1 00:18:57.142 Slot 1 Firmware Revision: 25.01 00:18:57.142 00:18:57.142 00:18:57.142 Commands Supported and Effects 00:18:57.142 ============================== 00:18:57.142 Admin Commands 00:18:57.142 -------------- 00:18:57.142 Get Log Page (02h): Supported 00:18:57.142 Identify (06h): Supported 00:18:57.142 Abort (08h): Supported 00:18:57.142 Set Features (09h): Supported 00:18:57.142 Get Features (0Ah): Supported 00:18:57.142 Asynchronous Event Request (0Ch): Supported 00:18:57.142 Keep Alive (18h): Supported 00:18:57.142 I/O Commands 00:18:57.142 ------------ 00:18:57.142 Flush (00h): Supported LBA-Change 00:18:57.142 Write (01h): Supported LBA-Change 00:18:57.142 Read (02h): Supported 00:18:57.142 Compare (05h): Supported 00:18:57.142 Write Zeroes (08h): Supported LBA-Change 00:18:57.142 Dataset Management (09h): Supported LBA-Change 00:18:57.142 Copy (19h): Supported LBA-Change 00:18:57.142 00:18:57.142 Error Log 00:18:57.142 ========= 00:18:57.142 00:18:57.142 Arbitration 00:18:57.142 =========== 00:18:57.142 Arbitration Burst: 1 00:18:57.142 00:18:57.142 Power Management 00:18:57.142 ================ 00:18:57.142 Number of Power States: 1 00:18:57.142 Current Power State: Power State #0 00:18:57.142 Power State #0: 00:18:57.142 Max Power: 0.00 W 00:18:57.142 Non-Operational State: Operational 00:18:57.142 Entry Latency: Not Reported 00:18:57.142 Exit Latency: Not Reported 00:18:57.142 Relative Read Throughput: 0 00:18:57.142 Relative Read Latency: 0 00:18:57.142 Relative Write Throughput: 0 00:18:57.142 Relative Write Latency: 0 00:18:57.142 Idle Power: Not Reported 00:18:57.142 Active Power: Not Reported 00:18:57.142 Non-Operational Permissive Mode: Not Supported 00:18:57.142 00:18:57.142 Health Information 00:18:57.142 ================== 00:18:57.142 Critical Warnings: 00:18:57.142 Available Spare Space: OK 00:18:57.142 Temperature: OK 00:18:57.142 Device Reliability: OK 00:18:57.142 Read Only: No 00:18:57.142 Volatile Memory Backup: OK 00:18:57.142 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:57.142 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:57.142 Available Spare: 0% 00:18:57.142 Available Spare Threshold: 0% 00:18:57.142 Life Percentage Used:[2024-12-09 09:29:34.774485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.142 [2024-12-09 09:29:34.774491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ed9750) 00:18:57.142 [2024-12-09 09:29:34.774498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.142 [2024-12-09 09:29:34.774516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3e1c0, cid 7, qid 0 00:18:57.142 [2024-12-09 09:29:34.774561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.142 [2024-12-09 09:29:34.774568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.142 [2024-12-09 09:29:34.774572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.142 [2024-12-09 09:29:34.774576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3e1c0) on tqpair=0x1ed9750 00:18:57.142 [2024-12-09 09:29:34.774609] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 
00:18:57.142 [2024-12-09 09:29:34.774619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d740) on tqpair=0x1ed9750 00:18:57.142 [2024-12-09 09:29:34.774625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.142 [2024-12-09 09:29:34.774631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3d8c0) on tqpair=0x1ed9750 00:18:57.142 [2024-12-09 09:29:34.774636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.143 [2024-12-09 09:29:34.774642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3da40) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.774647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.143 [2024-12-09 09:29:34.774652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.774657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.143 [2024-12-09 09:29:34.774666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.774681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.774697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.774739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.774745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.774749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.774760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.774775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.774791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.774842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.774848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.774852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.774861] 
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:18:57.143 [2024-12-09 09:29:34.774866] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:18:57.143 [2024-12-09 09:29:34.774875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.774890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.774903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.774937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.774943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.774947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.774961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.774970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.774976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.774989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.775023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.775030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.775033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.775047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.775061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.775074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.775111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.775117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.775121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775125] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.775134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.775149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.775162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.775198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.775204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.775208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.775222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.775237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.775249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.775289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.143 [2024-12-09 09:29:34.775295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.143 [2024-12-09 09:29:34.775299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.143 [2024-12-09 09:29:34.775312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.143 [2024-12-09 09:29:34.775321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.143 [2024-12-09 09:29:34.775327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.143 [2024-12-09 09:29:34.775340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.143 [2024-12-09 09:29:34.775374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.144 [2024-12-09 09:29:34.775381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.144 [2024-12-09 09:29:34.775385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.775389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.144 [2024-12-09 09:29:34.775398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.775402] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.775406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.144 [2024-12-09 09:29:34.775413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.144 [2024-12-09 09:29:34.775426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.144 [2024-12-09 09:29:34.779472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.144 [2024-12-09 09:29:34.779490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.144 [2024-12-09 09:29:34.779494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.779498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.144 [2024-12-09 09:29:34.779510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.779514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.779518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ed9750) 00:18:57.144 [2024-12-09 09:29:34.779525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.144 [2024-12-09 09:29:34.779542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f3dbc0, cid 3, qid 0 00:18:57.144 [2024-12-09 09:29:34.779574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:57.144 [2024-12-09 09:29:34.779580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:57.144 [2024-12-09 09:29:34.779584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:57.144 [2024-12-09 09:29:34.779588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f3dbc0) on tqpair=0x1ed9750 00:18:57.144 [2024-12-09 09:29:34.779595] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:18:57.144 0% 00:18:57.144 Data Units Read: 0 00:18:57.144 Data Units Written: 0 00:18:57.144 Host Read Commands: 0 00:18:57.144 Host Write Commands: 0 00:18:57.144 Controller Busy Time: 0 minutes 00:18:57.144 Power Cycles: 0 00:18:57.144 Power On Hours: 0 hours 00:18:57.144 Unsafe Shutdowns: 0 00:18:57.144 Unrecoverable Media Errors: 0 00:18:57.144 Lifetime Error Log Entries: 0 00:18:57.144 Warning Temperature Time: 0 minutes 00:18:57.144 Critical Temperature Time: 0 minutes 00:18:57.144 00:18:57.144 Number of Queues 00:18:57.144 ================ 00:18:57.144 Number of I/O Submission Queues: 127 00:18:57.144 Number of I/O Completion Queues: 127 00:18:57.144 00:18:57.144 Active Namespaces 00:18:57.144 ================= 00:18:57.144 Namespace ID:1 00:18:57.144 Error Recovery Timeout: Unlimited 00:18:57.144 Command Set Identifier: NVM (00h) 00:18:57.144 Deallocate: Supported 00:18:57.144 Deallocated/Unwritten Error: Not Supported 00:18:57.144 Deallocated Read Value: Unknown 00:18:57.144 Deallocate in Write Zeroes: Not Supported 00:18:57.144 Deallocated Guard Field: 0xFFFF 00:18:57.144 Flush: Supported 00:18:57.144 Reservation: Supported 00:18:57.144 Namespace Sharing Capabilities: Multiple Controllers 00:18:57.144 Size (in LBAs): 131072 (0GiB) 00:18:57.144 Capacity (in LBAs): 
131072 (0GiB) 00:18:57.144 Utilization (in LBAs): 131072 (0GiB) 00:18:57.144 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:57.144 EUI64: ABCDEF0123456789 00:18:57.144 UUID: be5c69be-0d57-4afb-800f-b6f2ce541b43 00:18:57.144 Thin Provisioning: Not Supported 00:18:57.144 Per-NS Atomic Units: Yes 00:18:57.144 Atomic Boundary Size (Normal): 0 00:18:57.144 Atomic Boundary Size (PFail): 0 00:18:57.144 Atomic Boundary Offset: 0 00:18:57.144 Maximum Single Source Range Length: 65535 00:18:57.144 Maximum Copy Length: 65535 00:18:57.144 Maximum Source Range Count: 1 00:18:57.144 NGUID/EUI64 Never Reused: No 00:18:57.144 Namespace Write Protected: No 00:18:57.144 Number of LBA Formats: 1 00:18:57.144 Current LBA Format: LBA Format #00 00:18:57.144 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:57.144 00:18:57.144 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:57.144 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.144 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.144 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.404 rmmod nvme_tcp 00:18:57.404 rmmod nvme_fabrics 00:18:57.404 rmmod nvme_keyring 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73819 ']' 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73819 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73819 ']' 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73819 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73819 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.404 killing process with pid 73819 
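The controller and namespace data above is what SPDK's identify example reports for nqn.2016-06.io.spdk:cnode1 over NVMe/TCP. A rough host-side equivalent using the stock nvme-cli initiator would be the sketch below; it is illustrative only, it assumes the target were still listening on 10.0.0.3:4420 (the subsystem is being deleted at this point in the run), and the /dev/nvme0 device names depend on enumeration order.

  modprobe nvme-tcp                                    # kernel initiator, unloaded again later in this log
  nvme discover -t tcp -a 10.0.0.3 -s 4420             # discovery listener added by the test
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                              # controller data (keep-alive, SGL support, log pages, ...)
  nvme id-ns   /dev/nvme0n1                            # namespace data (NGUID, EUI64, LBA formats)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1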
00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73819' 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73819 00:18:57.404 09:29:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73819 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:57.662 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:57.921 ************************************ 00:18:57.921 END TEST nvmf_identify 00:18:57.921 ************************************ 00:18:57.921 00:18:57.921 real 0m3.114s 00:18:57.921 user 0m7.161s 00:18:57.921 sys 
0m0.986s 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.921 ************************************ 00:18:57.921 START TEST nvmf_perf 00:18:57.921 ************************************ 00:18:57.921 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:58.181 * Looking for test storage... 00:18:58.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:58.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.181 --rc genhtml_branch_coverage=1 00:18:58.181 --rc genhtml_function_coverage=1 00:18:58.181 --rc genhtml_legend=1 00:18:58.181 --rc geninfo_all_blocks=1 00:18:58.181 --rc geninfo_unexecuted_blocks=1 00:18:58.181 00:18:58.181 ' 00:18:58.181 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:58.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.182 --rc genhtml_branch_coverage=1 00:18:58.182 --rc genhtml_function_coverage=1 00:18:58.182 --rc genhtml_legend=1 00:18:58.182 --rc geninfo_all_blocks=1 00:18:58.182 --rc geninfo_unexecuted_blocks=1 00:18:58.182 00:18:58.182 ' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:58.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.182 --rc genhtml_branch_coverage=1 00:18:58.182 --rc genhtml_function_coverage=1 00:18:58.182 --rc genhtml_legend=1 00:18:58.182 --rc geninfo_all_blocks=1 00:18:58.182 --rc geninfo_unexecuted_blocks=1 00:18:58.182 00:18:58.182 ' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:58.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.182 --rc genhtml_branch_coverage=1 00:18:58.182 --rc genhtml_function_coverage=1 00:18:58.182 --rc genhtml_legend=1 00:18:58.182 --rc geninfo_all_blocks=1 00:18:58.182 --rc geninfo_unexecuted_blocks=1 00:18:58.182 00:18:58.182 ' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.182 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:58.182 Cannot find device "nvmf_init_br" 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:58.182 Cannot find device "nvmf_init_br2" 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:58.182 Cannot find device "nvmf_tgt_br" 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.182 Cannot find device "nvmf_tgt_br2" 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:58.182 Cannot find device "nvmf_init_br" 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:58.182 Cannot find device "nvmf_init_br2" 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:58.182 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:58.442 Cannot find device "nvmf_tgt_br" 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:58.442 Cannot find device "nvmf_tgt_br2" 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:58.442 Cannot find device "nvmf_br" 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:58.442 Cannot find device "nvmf_init_if" 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:58.442 Cannot find device "nvmf_init_if2" 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.442 09:29:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:58.442 09:29:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:58.442 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:58.702 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:58.702 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:58.702 00:18:58.702 --- 10.0.0.3 ping statistics --- 00:18:58.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.702 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:58.702 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:58.702 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:18:58.702 00:18:58.702 --- 10.0.0.4 ping statistics --- 00:18:58.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.702 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:58.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:58.702 00:18:58.702 --- 10.0.0.1 ping statistics --- 00:18:58.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.702 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:58.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:18:58.702 00:18:58.702 --- 10.0.0.2 ping statistics --- 00:18:58.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.702 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74083 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74083 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74083 ']' 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
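At this point nvmf_veth_init has finished building the test network, and the four pings confirm that 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk are all reachable across the bridge. Condensed from the commands in this log, the topology amounts to the following sketch; the second interface pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is set up the same way and omitted for brevity.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, default netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                            # initiator -> target, as verified above

The nvmf_tgt process itself is launched with ip netns exec nvmf_tgt_ns_spdk, which is why its listener on 10.0.0.3:4420 is only reachable through this veth/bridge path.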
00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.702 09:29:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:58.702 [2024-12-09 09:29:36.369264] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:18:58.702 [2024-12-09 09:29:36.369325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.962 [2024-12-09 09:29:36.522615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.962 [2024-12-09 09:29:36.571998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.962 [2024-12-09 09:29:36.572041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.962 [2024-12-09 09:29:36.572050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.962 [2024-12-09 09:29:36.572058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.962 [2024-12-09 09:29:36.572065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.962 [2024-12-09 09:29:36.572927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.962 [2024-12-09 09:29:36.573448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.962 [2024-12-09 09:29:36.573580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.962 [2024-12-09 09:29:36.573533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.962 [2024-12-09 09:29:36.615185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:59.898 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:00.157 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:00.157 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:00.447 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:00.447 09:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.708 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:00.708 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:19:00.708 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:00.708 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:00.708 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:00.708 [2024-12-09 09:29:38.413895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.966 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:00.966 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:00.966 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.225 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:01.225 09:29:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:01.483 09:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:01.742 [2024-12-09 09:29:39.269813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:01.742 09:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:02.000 09:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:02.000 09:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:02.000 09:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:02.000 09:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:02.935 Initializing NVMe Controllers 00:19:02.935 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:02.935 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:02.935 Initialization complete. Launching workers. 00:19:02.935 ======================================================== 00:19:02.935 Latency(us) 00:19:02.935 Device Information : IOPS MiB/s Average min max 00:19:02.935 PCIE (0000:00:10.0) NSID 1 from core 0: 18222.00 71.18 1756.60 523.40 10277.02 00:19:02.935 ======================================================== 00:19:02.935 Total : 18222.00 71.18 1756.60 523.40 10277.02 00:19:02.935 00:19:02.935 09:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:04.308 Initializing NVMe Controllers 00:19:04.308 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:04.308 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:04.308 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:04.308 Initialization complete. Launching workers. 
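Behind the test wrappers, the target-side configuration that perf.sh just performed reduces to the RPC sequence below (paths abbreviated relative to the SPDK repo root; Nvme0n1 is the bdev created from the local 0000:00:10.0 controller attached via gen_nvme.sh), followed by the first fabrics measurement exactly as launched above:

  scripts/rpc.py bdev_malloc_create 64 512                      # creates Malloc0 (64 MiB, 512-byte blocks)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

Note the namespace order: Malloc0 is added first and becomes NSID 1, Nvme0n1 becomes NSID 2, which is presumably why the two rows in the latency tables that follow behave so differently (a RAM-backed malloc bdev versus the emulated NVMe drive behind 0000:00:10.0).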
00:19:04.308 ======================================================== 00:19:04.308 Latency(us) 00:19:04.308 Device Information : IOPS MiB/s Average min max 00:19:04.308 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4589.00 17.93 217.66 75.48 7122.38 00:19:04.308 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.00 0.49 7988.54 5975.11 14923.85 00:19:04.308 ======================================================== 00:19:04.308 Total : 4715.00 18.42 425.33 75.48 14923.85 00:19:04.308 00:19:04.308 09:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:05.687 Initializing NVMe Controllers 00:19:05.687 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:05.687 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:05.687 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:05.687 Initialization complete. Launching workers. 00:19:05.687 ======================================================== 00:19:05.687 Latency(us) 00:19:05.687 Device Information : IOPS MiB/s Average min max 00:19:05.687 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11316.03 44.20 2827.98 556.02 9368.85 00:19:05.687 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3991.48 15.59 8043.62 4467.15 16482.90 00:19:05.687 ======================================================== 00:19:05.687 Total : 15307.51 59.79 4187.98 556.02 16482.90 00:19:05.687 00:19:05.687 09:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:05.687 09:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:08.227 Initializing NVMe Controllers 00:19:08.227 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:08.227 Controller IO queue size 128, less than required. 00:19:08.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:08.227 Controller IO queue size 128, less than required. 00:19:08.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:08.228 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:08.228 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:08.228 Initialization complete. Launching workers. 
00:19:08.228 ======================================================== 00:19:08.228 Latency(us) 00:19:08.228 Device Information : IOPS MiB/s Average min max 00:19:08.228 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2094.49 523.62 61998.07 31812.28 100435.57 00:19:08.228 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 675.00 168.75 193690.08 29297.08 305957.18 00:19:08.228 ======================================================== 00:19:08.228 Total : 2769.49 692.37 94094.88 29297.08 305957.18 00:19:08.228 00:19:08.228 09:29:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:19:08.487 Initializing NVMe Controllers 00:19:08.487 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:08.487 Controller IO queue size 128, less than required. 00:19:08.487 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:08.487 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:08.487 Controller IO queue size 128, less than required. 00:19:08.487 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:08.487 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:19:08.487 WARNING: Some requested NVMe devices were skipped 00:19:08.487 No valid NVMe controllers or AIO or URING devices found 00:19:08.487 09:29:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:19:11.020 Initializing NVMe Controllers 00:19:11.020 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.020 Controller IO queue size 128, less than required. 00:19:11.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:11.020 Controller IO queue size 128, less than required. 00:19:11.020 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:11.020 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:11.020 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:11.020 Initialization complete. Launching workers. 
00:19:11.020 00:19:11.020 ==================== 00:19:11.020 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:11.020 TCP transport: 00:19:11.020 polls: 12790 00:19:11.020 idle_polls: 8713 00:19:11.020 sock_completions: 4077 00:19:11.020 nvme_completions: 7489 00:19:11.020 submitted_requests: 11128 00:19:11.020 queued_requests: 1 00:19:11.020 00:19:11.020 ==================== 00:19:11.020 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:11.020 TCP transport: 00:19:11.020 polls: 15349 00:19:11.020 idle_polls: 9926 00:19:11.020 sock_completions: 5423 00:19:11.020 nvme_completions: 8301 00:19:11.020 submitted_requests: 12498 00:19:11.020 queued_requests: 1 00:19:11.020 ======================================================== 00:19:11.020 Latency(us) 00:19:11.020 Device Information : IOPS MiB/s Average min max 00:19:11.020 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1867.94 466.99 69097.15 37040.48 111550.71 00:19:11.020 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2070.50 517.63 62538.93 29368.79 101155.07 00:19:11.020 ======================================================== 00:19:11.020 Total : 3938.45 984.61 65649.39 29368.79 111550.71 00:19:11.020 00:19:11.020 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.278 09:29:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.278 rmmod nvme_tcp 00:19:11.279 rmmod nvme_fabrics 00:19:11.576 rmmod nvme_keyring 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74083 ']' 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74083 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74083 ']' 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74083 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:19:11.576 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.577 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74083 00:19:11.577 killing process with pid 74083 00:19:11.577 09:29:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.577 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.577 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74083' 00:19:11.577 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74083 00:19:11.577 09:29:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74083 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:12.952 00:19:12.952 real 0m15.144s 00:19:12.952 user 0m53.360s 00:19:12.952 sys 0m4.603s 00:19:12.952 09:29:50 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.952 09:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:12.952 ************************************ 00:19:13.210 END TEST nvmf_perf 00:19:13.210 ************************************ 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.210 ************************************ 00:19:13.210 START TEST nvmf_fio_host 00:19:13.210 ************************************ 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:13.210 * Looking for test storage... 00:19:13.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.210 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.469 --rc genhtml_branch_coverage=1 00:19:13.469 --rc genhtml_function_coverage=1 00:19:13.469 --rc genhtml_legend=1 00:19:13.469 --rc geninfo_all_blocks=1 00:19:13.469 --rc geninfo_unexecuted_blocks=1 00:19:13.469 00:19:13.469 ' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.469 --rc genhtml_branch_coverage=1 00:19:13.469 --rc genhtml_function_coverage=1 00:19:13.469 --rc genhtml_legend=1 00:19:13.469 --rc geninfo_all_blocks=1 00:19:13.469 --rc geninfo_unexecuted_blocks=1 00:19:13.469 00:19:13.469 ' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.469 --rc genhtml_branch_coverage=1 00:19:13.469 --rc genhtml_function_coverage=1 00:19:13.469 --rc genhtml_legend=1 00:19:13.469 --rc geninfo_all_blocks=1 00:19:13.469 --rc geninfo_unexecuted_blocks=1 00:19:13.469 00:19:13.469 ' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.469 --rc genhtml_branch_coverage=1 00:19:13.469 --rc genhtml_function_coverage=1 00:19:13.469 --rc genhtml_legend=1 00:19:13.469 --rc geninfo_all_blocks=1 00:19:13.469 --rc geninfo_unexecuted_blocks=1 00:19:13.469 00:19:13.469 ' 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.469 09:29:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.469 09:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.469 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.470 09:29:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.470 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:13.470 Cannot find device "nvmf_init_br" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:13.470 Cannot find device "nvmf_init_br2" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:13.470 Cannot find device "nvmf_tgt_br" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:13.470 Cannot find device "nvmf_tgt_br2" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:13.470 Cannot find device "nvmf_init_br" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:13.470 Cannot find device "nvmf_init_br2" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:13.470 Cannot find device "nvmf_tgt_br" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:13.470 Cannot find device "nvmf_tgt_br2" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:13.470 Cannot find device "nvmf_br" 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:13.470 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:13.728 Cannot find device "nvmf_init_if" 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:13.728 Cannot find device "nvmf_init_if2" 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:13.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:13.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:13.728 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:13.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:13.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:19:13.986 00:19:13.986 --- 10.0.0.3 ping statistics --- 00:19:13.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.986 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:13.986 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:13.986 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:19:13.986 00:19:13.986 --- 10.0.0.4 ping statistics --- 00:19:13.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.986 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:13.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:13.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:19:13.986 00:19:13.986 --- 10.0.0.1 ping statistics --- 00:19:13.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.986 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:13.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:19:13.986 00:19:13.986 --- 10.0.0.2 ping statistics --- 00:19:13.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.986 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74549 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74549 00:19:13.986 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 74549 ']' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.986 09:29:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.986 [2024-12-09 09:29:51.679634] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:19:13.986 [2024-12-09 09:29:51.679973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.245 [2024-12-09 09:29:51.847336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.245 [2024-12-09 09:29:51.903144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.245 [2024-12-09 09:29:51.903410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.245 [2024-12-09 09:29:51.903430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.245 [2024-12-09 09:29:51.903440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.245 [2024-12-09 09:29:51.903447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.245 [2024-12-09 09:29:51.904706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.245 [2024-12-09 09:29:51.904876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.245 [2024-12-09 09:29:51.904964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.245 [2024-12-09 09:29:51.904965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.245 [2024-12-09 09:29:51.949510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.181 [2024-12-09 09:29:52.770873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.181 09:29:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:15.440 Malloc1 00:19:15.440 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:15.698 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.956 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:16.215 [2024-12-09 09:29:53.705876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:16.215 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:16.473 09:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:16.473 09:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:16.473 09:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:16.473 09:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:16.473 09:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:16.473 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:16.473 fio-3.35 00:19:16.473 Starting 1 thread 00:19:19.004 00:19:19.004 test: (groupid=0, jobs=1): err= 0: pid=74627: Mon Dec 9 09:29:56 2024 00:19:19.004 read: IOPS=10.8k, BW=42.3MiB/s (44.4MB/s)(84.9MiB/2006msec) 00:19:19.004 slat (nsec): min=1570, max=375780, avg=1902.85, stdev=3251.29 00:19:19.004 clat (usec): min=3042, max=10360, avg=6166.88, stdev=499.97 00:19:19.004 lat (usec): min=3104, max=10362, avg=6168.79, stdev=500.05 00:19:19.004 clat percentiles (usec): 00:19:19.004 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5866], 00:19:19.004 | 30.00th=[ 5997], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:19:19.004 | 70.00th=[ 6390], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:19:19.004 | 99.00th=[ 7570], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[10028], 00:19:19.004 | 99.99th=[10290] 00:19:19.004 bw ( KiB/s): min=42408, max=43968, per=100.00%, avg=43348.00, stdev=676.51, samples=4 00:19:19.004 iops : min=10602, max=10992, avg=10837.00, stdev=169.13, samples=4 00:19:19.004 write: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(84.7MiB/2006msec); 0 zone resets 00:19:19.004 slat (nsec): min=1600, max=271141, avg=1975.28, stdev=2139.86 00:19:19.004 clat (usec): min=2860, max=10291, avg=5599.42, stdev=478.31 00:19:19.004 lat (usec): min=2875, max=10293, avg=5601.39, stdev=478.50 00:19:19.004 
clat percentiles (usec): 00:19:19.004 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:19:19.004 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5669], 00:19:19.004 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 6194], 00:19:19.004 | 99.00th=[ 7046], 99.50th=[ 8291], 99.90th=[ 9372], 99.95th=[ 9634], 00:19:19.004 | 99.99th=[10159] 00:19:19.004 bw ( KiB/s): min=42760, max=43624, per=99.99%, avg=43248.00, stdev=359.56, samples=4 00:19:19.004 iops : min=10690, max=10906, avg=10812.00, stdev=89.89, samples=4 00:19:19.004 lat (msec) : 4=0.55%, 10=99.40%, 20=0.05% 00:19:19.004 cpu : usr=69.43%, sys=24.14%, ctx=83, majf=0, minf=7 00:19:19.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:19.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.004 issued rwts: total=21735,21690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.004 00:19:19.004 Run status group 0 (all jobs): 00:19:19.004 READ: bw=42.3MiB/s (44.4MB/s), 42.3MiB/s-42.3MiB/s (44.4MB/s-44.4MB/s), io=84.9MiB (89.0MB), run=2006-2006msec 00:19:19.004 WRITE: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=84.7MiB (88.8MB), run=2006-2006msec 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:19.004 09:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:19.004 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:19.004 fio-3.35 00:19:19.004 Starting 1 thread 00:19:21.539 00:19:21.539 test: (groupid=0, jobs=1): err= 0: pid=74670: Mon Dec 9 09:29:58 2024 00:19:21.539 read: IOPS=10.7k, BW=168MiB/s (176MB/s)(336MiB/2006msec) 00:19:21.539 slat (nsec): min=2482, max=94142, avg=2830.00, stdev=1524.70 00:19:21.539 clat (usec): min=1571, max=20820, avg=6616.79, stdev=2136.28 00:19:21.539 lat (usec): min=1573, max=20831, avg=6619.62, stdev=2136.51 00:19:21.539 clat percentiles (usec): 00:19:21.539 | 1.00th=[ 3097], 5.00th=[ 3720], 10.00th=[ 4146], 20.00th=[ 4817], 00:19:21.539 | 30.00th=[ 5342], 40.00th=[ 5866], 50.00th=[ 6390], 60.00th=[ 6849], 00:19:21.539 | 70.00th=[ 7504], 80.00th=[ 8160], 90.00th=[ 9372], 95.00th=[10421], 00:19:21.539 | 99.00th=[12518], 99.50th=[14615], 99.90th=[18482], 99.95th=[20579], 00:19:21.539 | 99.99th=[20841] 00:19:21.539 bw ( KiB/s): min=83168, max=94208, per=50.12%, avg=86056.00, stdev=5436.26, samples=4 00:19:21.539 iops : min= 5198, max= 5888, avg=5378.50, stdev=339.77, samples=4 00:19:21.539 write: IOPS=6142, BW=96.0MiB/s (101MB/s)(176MiB/1832msec); 0 zone resets 00:19:21.539 slat (usec): min=28, max=442, avg=31.19, stdev= 9.91 00:19:21.539 clat (usec): min=3233, max=24942, avg=9344.71, stdev=2218.27 00:19:21.539 lat (usec): min=3262, max=24971, avg=9375.89, stdev=2222.04 00:19:21.539 clat percentiles (usec): 00:19:21.539 | 1.00th=[ 5932], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 7767], 00:19:21.539 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:19:21.539 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11731], 95.00th=[12649], 00:19:21.539 | 99.00th=[19268], 99.50th=[21627], 99.90th=[24773], 99.95th=[24773], 00:19:21.539 | 99.99th=[25035] 00:19:21.539 bw ( KiB/s): min=85856, max=98304, per=90.97%, avg=89400.00, stdev=5950.19, samples=4 00:19:21.539 iops : min= 5366, max= 6144, avg=5587.50, stdev=371.89, samples=4 00:19:21.539 lat (msec) : 2=0.06%, 4=5.28%, 10=79.96%, 20=14.41%, 50=0.29% 00:19:21.539 cpu : usr=81.26%, sys=14.71%, ctx=15, majf=0, minf=6 00:19:21.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:21.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.539 issued rwts: total=21527,11253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.539 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.539 00:19:21.539 Run status group 0 (all 
jobs): 00:19:21.539 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=336MiB (353MB), run=2006-2006msec 00:19:21.539 WRITE: bw=96.0MiB/s (101MB/s), 96.0MiB/s-96.0MiB/s (101MB/s-101MB/s), io=176MiB (184MB), run=1832-1832msec 00:19:21.539 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.799 rmmod nvme_tcp 00:19:21.799 rmmod nvme_fabrics 00:19:21.799 rmmod nvme_keyring 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74549 ']' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74549 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74549 ']' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74549 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74549 00:19:21.799 killing process with pid 74549 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74549' 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74549 00:19:21.799 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74549 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:22.058 09:29:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:22.058 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:22.317 00:19:22.317 real 0m9.197s 00:19:22.317 user 0m35.053s 00:19:22.317 sys 0m2.854s 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.317 ************************************ 00:19:22.317 END TEST nvmf_fio_host 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.317 ************************************ 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.317 09:29:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.317 ************************************ 00:19:22.317 START TEST nvmf_failover 
00:19:22.317 ************************************ 00:19:22.317 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:22.581 * Looking for test storage... 00:19:22.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:22.581 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.582 --rc genhtml_branch_coverage=1 00:19:22.582 --rc genhtml_function_coverage=1 00:19:22.582 --rc genhtml_legend=1 00:19:22.582 --rc geninfo_all_blocks=1 00:19:22.582 --rc geninfo_unexecuted_blocks=1 00:19:22.582 00:19:22.582 ' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.582 --rc genhtml_branch_coverage=1 00:19:22.582 --rc genhtml_function_coverage=1 00:19:22.582 --rc genhtml_legend=1 00:19:22.582 --rc geninfo_all_blocks=1 00:19:22.582 --rc geninfo_unexecuted_blocks=1 00:19:22.582 00:19:22.582 ' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.582 --rc genhtml_branch_coverage=1 00:19:22.582 --rc genhtml_function_coverage=1 00:19:22.582 --rc genhtml_legend=1 00:19:22.582 --rc geninfo_all_blocks=1 00:19:22.582 --rc geninfo_unexecuted_blocks=1 00:19:22.582 00:19:22.582 ' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.582 --rc genhtml_branch_coverage=1 00:19:22.582 --rc genhtml_function_coverage=1 00:19:22.582 --rc genhtml_legend=1 00:19:22.582 --rc geninfo_all_blocks=1 00:19:22.582 --rc geninfo_unexecuted_blocks=1 00:19:22.582 00:19:22.582 ' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.582 
09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.582 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
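Annotation: nvmftestinit above hands off to nvmf_veth_init from test/nvmf/common.sh, and the trace that follows shows it building the virtual test network command by command. A condensed sketch of that topology, reconstructed from the commands printed below (interface names and addresses are the ones in the log; the loop and grouping here are a simplification, not the verbatim helper):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: initiator ends stay in the root namespace, target ends move into the namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses in the root namespace, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bridge the *_br ends together so initiator and target namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # allow NVMe/TCP traffic to port 4420 in from the initiator interfaces
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages immediately below are expected: the helper first tears down any leftover topology from a previous run before recreating it.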
00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:22.582 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.583 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:22.855 Cannot find device "nvmf_init_br" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:22.855 Cannot find device "nvmf_init_br2" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:22.855 Cannot find device "nvmf_tgt_br" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.855 Cannot find device "nvmf_tgt_br2" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:22.855 Cannot find device "nvmf_init_br" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:22.855 Cannot find device "nvmf_init_br2" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:22.855 Cannot find device "nvmf_tgt_br" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:22.855 Cannot find device "nvmf_tgt_br2" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:22.855 Cannot find device "nvmf_br" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:22.855 Cannot find device "nvmf_init_if" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:22.855 Cannot find device "nvmf_init_if2" 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.855 
09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.855 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.112 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:23.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:23.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:19:23.113 00:19:23.113 --- 10.0.0.3 ping statistics --- 00:19:23.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.113 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:23.113 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:23.113 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:19:23.113 00:19:23.113 --- 10.0.0.4 ping statistics --- 00:19:23.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.113 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:23.113 00:19:23.113 --- 10.0.0.1 ping statistics --- 00:19:23.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.113 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:23.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:23.113 00:19:23.113 --- 10.0.0.2 ping statistics --- 00:19:23.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.113 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74944 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74944 00:19:23.113 09:30:00 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74944 ']' 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:23.370 [2024-12-09 09:30:00.870790] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:19:23.370 [2024-12-09 09:30:00.871251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.370 [2024-12-09 09:30:01.022804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:23.370 [2024-12-09 09:30:01.066271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.370 [2024-12-09 09:30:01.066516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.370 [2024-12-09 09:30:01.066535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.370 [2024-12-09 09:30:01.066544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.370 [2024-12-09 09:30:01.066551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
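Annotation: with the nvmf_tgt application up inside the namespace, host/failover.sh provisions it over rpc.py and then starts bdevperf against it, as the trace below shows. A condensed sketch of that sequence, using the same paths, names, and ports that appear in the log (the $rpc shorthand is introduced here for brevity; this mirrors the traced RPCs rather than quoting the script verbatim):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport plus a 64 MiB / 512 B-block malloc bdev exported through one subsystem
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on the in-namespace target address, one per failover leg
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
    done
    # bdevperf runs as its own process and is driven over a separate RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    # attach the primary path on 4420 with failover enabled, then a second path on 4421
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover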
00:19:23.370 [2024-12-09 09:30:01.067502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.370 [2024-12-09 09:30:01.067667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.370 [2024-12-09 09:30:01.067674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.628 [2024-12-09 09:30:01.111265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.193 09:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:24.450 [2024-12-09 09:30:01.982211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.450 09:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:24.707 Malloc0 00:19:24.707 09:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.974 09:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.974 09:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:25.232 [2024-12-09 09:30:02.818367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:25.232 09:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:25.492 [2024-12-09 09:30:03.010338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:25.492 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:25.752 [2024-12-09 09:30:03.226343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74996 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74996 /var/tmp/bdevperf.sock 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 74996 ']' 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:25.752 09:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:26.687 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.687 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:26.687 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:26.945 NVMe0n1 00:19:26.945 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:27.203 00:19:27.203 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75020 00:19:27.203 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.203 09:30:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:28.136 09:30:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:28.395 09:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:31.675 09:30:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:31.675 00:19:31.675 09:30:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:31.932 09:30:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:35.217 09:30:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:35.217 [2024-12-09 09:30:12.710418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.217 09:30:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:36.155 09:30:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:36.415 09:30:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75020 00:19:42.986 { 00:19:42.986 "results": [ 00:19:42.986 { 00:19:42.986 "job": "NVMe0n1", 00:19:42.986 "core_mask": "0x1", 00:19:42.986 "workload": "verify", 00:19:42.986 "status": "finished", 00:19:42.986 "verify_range": { 00:19:42.986 "start": 0, 00:19:42.986 "length": 16384 00:19:42.986 }, 00:19:42.986 "queue_depth": 128, 00:19:42.986 "io_size": 4096, 00:19:42.986 "runtime": 15.008194, 00:19:42.986 "iops": 11103.467878946662, 00:19:42.986 "mibps": 43.3729214021354, 00:19:42.986 "io_failed": 4381, 00:19:42.986 "io_timeout": 0, 00:19:42.986 "avg_latency_us": 11208.78068531963, 00:19:42.986 "min_latency_us": 450.7244979919679, 00:19:42.986 "max_latency_us": 13423.036144578313 00:19:42.986 } 00:19:42.986 ], 00:19:42.986 "core_count": 1 00:19:42.986 } 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74996 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74996 ']' 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74996 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74996 00:19:42.986 killing process with pid 74996 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74996' 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74996 00:19:42.986 09:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74996 00:19:42.986 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:42.986 [2024-12-09 09:30:03.295426] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:19:42.986 [2024-12-09 09:30:03.295546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74996 ] 00:19:42.986 [2024-12-09 09:30:03.447609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.986 [2024-12-09 09:30:03.499153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.986 [2024-12-09 09:30:03.542688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.986 Running I/O for 15 seconds... 
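Annotation: while the 15-second verify workload runs, the script toggles listeners so the host is repeatedly forced onto a surviving path; the summarized results above (io_failed counted, test still passing) and the ABORTED - SQ DELETION (00/08) completions that fill the rest of the try.txt dump appear to be the host-side echo of those removals, as the queues on each dropped path are deleted mid-I/O. A condensed sketch of the toggling sequence as it appears in the trace (again using the $rpc shorthand introduced above):

    # drop the first path, give I/O time to fail over
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    # add a third path on 4422, then drop the second
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    # bring 4420 back, then retire 4422 and wait for the bdevperf run to finish
    $rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422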
00:19:42.986 10576.00 IOPS, 41.31 MiB/s [2024-12-09T09:30:20.709Z] [2024-12-09 09:30:05.978727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.986 [2024-12-09 09:30:05.978789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.978814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.986 [2024-12-09 09:30:05.978830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.978848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.986 [2024-12-09 09:30:05.978867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.978884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.986 [2024-12-09 09:30:05.978898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.978914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.978929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.978945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.978959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.978975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.978989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:42.986 [2024-12-09 09:30:05.979094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.986 [2024-12-09 09:30:05.979402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 
09:30:05.979422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.986 [2024-12-09 09:30:05.979437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.986 [2024-12-09 09:30:05.979478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.986 [2024-12-09 09:30:05.979494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.979843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.979872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.979909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.979940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.979970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.979986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95696 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.987 [2024-12-09 09:30:05.980583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 
[2024-12-09 09:30:05.980732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.987 [2024-12-09 09:30:05.980756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.987 [2024-12-09 09:30:05.980771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.980982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.980998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.988 [2024-12-09 09:30:05.981587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 [2024-12-09 09:30:05.981968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.981983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.988 
[2024-12-09 09:30:05.981998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.988 [2024-12-09 09:30:05.982013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.989 [2024-12-09 09:30:05.982330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:86 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:05.982823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c59c0 is same with the state(6) to be set 00:19:42.989 [2024-12-09 09:30:05.982856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.989 [2024-12-09 09:30:05.982866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.989 [2024-12-09 09:30:05.982877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:19:42.989 [2024-12-09 09:30:05.982892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.982949] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:42.989 [2024-12-09 09:30:05.982999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.989 [2024-12-09 09:30:05.983016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.983031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.989 [2024-12-09 09:30:05.983045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.983061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.989 [2024-12-09 09:30:05.983075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.983090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.989 [2024-12-09 09:30:05.983104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:05.983119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:42.989 [2024-12-09 09:30:05.986163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:42.989 [2024-12-09 09:30:05.986202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2054c60 (9): Bad file descriptor 00:19:42.989 [2024-12-09 09:30:06.014357] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:42.989 10504.50 IOPS, 41.03 MiB/s [2024-12-09T09:30:20.712Z] 10596.33 IOPS, 41.39 MiB/s [2024-12-09T09:30:20.712Z] 10904.00 IOPS, 42.59 MiB/s [2024-12-09T09:30:20.712Z] [2024-12-09 09:30:09.478285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:09.478347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:09.478391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:09.478405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:09.478420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:09.478433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:09.478447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:09.478471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:09.478486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:09.478499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.989 [2024-12-09 09:30:09.478513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.989 [2024-12-09 09:30:09.478526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.478554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.478580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.478982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.478995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 
09:30:09.479364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.990 [2024-12-09 09:30:09.479432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.990 [2024-12-09 09:30:09.479486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.990 [2024-12-09 09:30:09.479499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.991 [2024-12-09 09:30:09.479882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.479909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.479936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.479963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.479977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.479990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26096 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:42.991 [2024-12-09 09:30:09.480492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.991 [2024-12-09 09:30:09.480519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.991 [2024-12-09 09:30:09.480534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.480795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.480977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.480990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.992 [2024-12-09 09:30:09.481240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.992 [2024-12-09 09:30:09.481630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.992 [2024-12-09 09:30:09.481644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:09.481657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:09.481685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.481861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:09.481874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 
[2024-12-09 09:30:09.481917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.993 [2024-12-09 09:30:09.481928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.993 [2024-12-09 09:30:09.481939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26384 len:8 PRP1 0x0 PRP2 0x0 00:19:42.993 [2024-12-09 09:30:09.481952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.482004] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:42.993 [2024-12-09 09:30:09.482046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.993 [2024-12-09 09:30:09.482071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.482085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.993 [2024-12-09 09:30:09.482098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.482111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.993 [2024-12-09 09:30:09.482124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.482137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.993 [2024-12-09 09:30:09.482150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:09.482163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:42.993 [2024-12-09 09:30:09.484905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:42.993 [2024-12-09 09:30:09.484940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2054c60 (9): Bad file descriptor 00:19:42.993 [2024-12-09 09:30:09.515901] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
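The run of *NOTICE* messages above is the bdev_nvme failover path at work: every queued READ/WRITE on the I/O qpair is completed with ABORTED - SQ DELETION (00/08) while the TCP connection to 10.0.0.3:4421 is torn down, the pending admin ASYNC EVENT REQUESTs are aborted the same way, and the controller is then reconnected to 10.0.0.3:4422 and reset successfully. The per-second samples that follow (~11,000 IOPS at ~43 MiB/s) are consistent with the 8-sector (4 KiB) I/Os printed here: 11,012 IOPS × 4096 B ≈ 43.0 MiB/s, so throughput continues across the failover. Below is a minimal sketch of how a console log like this could be post-processed to count aborted completions, failover events, and successful resets; the script and the log file path "nvmf_failover.log" are hypothetical, and the patterns only cover the message formats visible in this output.

#!/usr/bin/env python3
# Minimal sketch (not part of the test suite): summarize NVMe-oF failover
# activity in an SPDK autotest console log. The log path below is an
# assumption; the regexes match only the *NOTICE* formats shown above.
import re
import sys
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")

def summarize(path):
    counts = Counter()
    failovers = []
    with open(path) as fh:
        for line in fh:
            # Count console lines that report an ABORTED - SQ DELETION completion.
            if ABORT_RE.search(line):
                counts["lines_with_aborted_completions"] += 1
            # Record each failover source/target pair announced by bdev_nvme.
            m = FAILOVER_RE.search(line)
            if m:
                failovers.append(m.groups())
            # Count successful controller resets after failover.
            if RESET_OK_RE.search(line):
                counts["successful_resets"] += 1
    return counts, failovers

if __name__ == "__main__":
    log_path = sys.argv[1] if len(sys.argv) > 1 else "nvmf_failover.log"
    counts, failovers = summarize(log_path)
    for key, value in sorted(counts.items()):
        print(f"{key}: {value}")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")

For the episode above, such a pass would report one failover (10.0.0.3:4421 -> 10.0.0.3:4422) and one successful reset, with the abort count reflecting how many in-flight commands were drained when the submission queue was deleted.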
00:19:42.993 10993.00 IOPS, 42.94 MiB/s [2024-12-09T09:30:20.716Z] 11012.67 IOPS, 43.02 MiB/s [2024-12-09T09:30:20.716Z] 11015.57 IOPS, 43.03 MiB/s [2024-12-09T09:30:20.716Z] 11014.62 IOPS, 43.03 MiB/s [2024-12-09T09:30:20.716Z] 11011.11 IOPS, 43.01 MiB/s [2024-12-09T09:30:20.716Z] [2024-12-09 09:30:13.959486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:13.959548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.959973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.959996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:42.993 [2024-12-09 09:30:13.960221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.993 [2024-12-09 09:30:13.960250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:13.960279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.993 [2024-12-09 09:30:13.960294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.993 [2024-12-09 09:30:13.960308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.960337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.960382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.960412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.960443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.960472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.960515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960554] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.960979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.960994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.961008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.961038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.994 [2024-12-09 09:30:13.961067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 
[2024-12-09 09:30:13.961565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.994 [2024-12-09 09:30:13.961625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.994 [2024-12-09 09:30:13.961641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.961655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.961957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.961973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.961987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.995 [2024-12-09 09:30:13.962213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41632 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.995 [2024-12-09 09:30:13.962743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.995 [2024-12-09 09:30:13.962759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.996 [2024-12-09 09:30:13.962773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.996 [2024-12-09 09:30:13.962804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.996 
[2024-12-09 09:30:13.962834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.996 [2024-12-09 09:30:13.962863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.996 [2024-12-09 09:30:13.962893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:42.996 [2024-12-09 09:30:13.962922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.962952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.962967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.962987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.963018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.963048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.963077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.963107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:42.996 [2024-12-09 09:30:13.963137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d55e0 is same with the state(6) to be set 00:19:42.996 [2024-12-09 09:30:13.963170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41168 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41744 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41752 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41760 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41768 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:41776 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41784 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41792 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41800 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41808 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41816 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41824 len:8 PRP1 0x0 PRP2 
0x0 00:19:42.996 [2024-12-09 09:30:13.963770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41832 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41840 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41848 len:8 PRP1 0x0 PRP2 0x0 00:19:42.996 [2024-12-09 09:30:13.963927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.996 [2024-12-09 09:30:13.963941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.996 [2024-12-09 09:30:13.963963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.996 [2024-12-09 09:30:13.963973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41856 len:8 PRP1 0x0 PRP2 0x0 00:19:42.997 [2024-12-09 09:30:13.963985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.997 [2024-12-09 09:30:13.963998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:42.997 [2024-12-09 09:30:13.964010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:42.997 [2024-12-09 09:30:13.964020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41864 len:8 PRP1 0x0 PRP2 0x0 00:19:42.997 [2024-12-09 09:30:13.964032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.997 [2024-12-09 09:30:13.964094] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:42.997 [2024-12-09 09:30:13.964151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.997 [2024-12-09 09:30:13.964167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.997 [2024-12-09 09:30:13.964181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.997 [2024-12-09 09:30:13.964194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.997 [2024-12-09 09:30:13.964207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.997 [2024-12-09 09:30:13.964220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.997 [2024-12-09 09:30:13.964233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.997 [2024-12-09 09:30:13.964271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.997 [2024-12-09 09:30:13.964285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:42.997 [2024-12-09 09:30:13.967360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:42.997 [2024-12-09 09:30:13.967400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2054c60 (9): Bad file descriptor 00:19:42.997 [2024-12-09 09:30:13.991005] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:19:42.997 10972.70 IOPS, 42.86 MiB/s [2024-12-09T09:30:20.720Z] 11041.73 IOPS, 43.13 MiB/s [2024-12-09T09:30:20.720Z] 11108.08 IOPS, 43.39 MiB/s [2024-12-09T09:30:20.720Z] 11128.54 IOPS, 43.47 MiB/s [2024-12-09T09:30:20.720Z] 11116.79 IOPS, 43.42 MiB/s [2024-12-09T09:30:20.720Z] 11102.60 IOPS, 43.37 MiB/s 00:19:42.997 Latency(us) 00:19:42.997 [2024-12-09T09:30:20.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.997 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:42.997 Verification LBA range: start 0x0 length 0x4000 00:19:42.997 NVMe0n1 : 15.01 11103.47 43.37 291.91 0.00 11208.78 450.72 13423.04 00:19:42.997 [2024-12-09T09:30:20.720Z] =================================================================================================================== 00:19:42.997 [2024-12-09T09:30:20.720Z] Total : 11103.47 43.37 291.91 0.00 11208.78 450.72 13423.04 00:19:42.997 Received shutdown signal, test time was about 15.000000 seconds 00:19:42.997 00:19:42.997 Latency(us) 00:19:42.997 [2024-12-09T09:30:20.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.997 [2024-12-09T09:30:20.720Z] =================================================================================================================== 00:19:42.997 [2024-12-09T09:30:20.720Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75201 00:19:42.997 09:30:20 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75201 /var/tmp/bdevperf.sock 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75201 ']' 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.997 09:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:43.562 09:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.562 09:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:43.562 09:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:43.562 [2024-12-09 09:30:21.276355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:43.818 09:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:43.818 [2024-12-09 09:30:21.480215] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:43.818 09:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:44.382 NVMe0n1 00:19:44.382 09:30:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:44.639 00:19:44.639 09:30:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:44.897 00:19:44.897 09:30:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:44.897 09:30:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:45.154 09:30:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:45.154 09:30:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:48.436 09:30:25 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.436 09:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:48.436 09:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75272 00:19:48.436 09:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:48.436 09:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75272 00:19:49.833 { 00:19:49.833 "results": [ 00:19:49.833 { 00:19:49.833 "job": "NVMe0n1", 00:19:49.833 "core_mask": "0x1", 00:19:49.833 "workload": "verify", 00:19:49.833 "status": "finished", 00:19:49.833 "verify_range": { 00:19:49.833 "start": 0, 00:19:49.833 "length": 16384 00:19:49.833 }, 00:19:49.833 "queue_depth": 128, 00:19:49.833 "io_size": 4096, 00:19:49.833 "runtime": 1.007727, 00:19:49.833 "iops": 9891.567855183, 00:19:49.833 "mibps": 38.638936934308596, 00:19:49.833 "io_failed": 0, 00:19:49.833 "io_timeout": 0, 00:19:49.833 "avg_latency_us": 12869.734738633506, 00:19:49.833 "min_latency_us": 1855.5373493975903, 00:19:49.833 "max_latency_us": 13686.232931726907 00:19:49.833 } 00:19:49.833 ], 00:19:49.833 "core_count": 1 00:19:49.833 } 00:19:49.833 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:49.833 [2024-12-09 09:30:20.160283] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:19:49.833 [2024-12-09 09:30:20.160360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75201 ] 00:19:49.833 [2024-12-09 09:30:20.310797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.833 [2024-12-09 09:30:20.365043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.833 [2024-12-09 09:30:20.408441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:49.833 [2024-12-09 09:30:22.802193] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:49.833 [2024-12-09 09:30:22.802280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.833 [2024-12-09 09:30:22.802301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.833 [2024-12-09 09:30:22.802318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.833 [2024-12-09 09:30:22.802331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.833 [2024-12-09 09:30:22.802345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.833 [2024-12-09 09:30:22.802358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.833 [2024-12-09 
09:30:22.802371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.833 [2024-12-09 09:30:22.802384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.833 [2024-12-09 09:30:22.802398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:49.833 [2024-12-09 09:30:22.802439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:49.833 [2024-12-09 09:30:22.802472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52dc60 (9): Bad file descriptor 00:19:49.833 [2024-12-09 09:30:22.810373] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:19:49.833 Running I/O for 1 seconds... 00:19:49.833 9840.00 IOPS, 38.44 MiB/s 00:19:49.833 Latency(us) 00:19:49.833 [2024-12-09T09:30:27.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.833 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:49.833 Verification LBA range: start 0x0 length 0x4000 00:19:49.833 NVMe0n1 : 1.01 9891.57 38.64 0.00 0.00 12869.73 1855.54 13686.23 00:19:49.833 [2024-12-09T09:30:27.556Z] =================================================================================================================== 00:19:49.833 [2024-12-09T09:30:27.556Z] Total : 9891.57 38.64 0.00 0.00 12869.73 1855.54 13686.23 00:19:49.833 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:49.833 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:49.833 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:50.160 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:50.160 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:50.160 09:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:50.418 09:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75201 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75201 ']' 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75201 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75201 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75201' 00:19:53.702 killing process with pid 75201 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75201 00:19:53.702 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75201 00:19:53.961 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:53.961 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:54.220 rmmod nvme_tcp 00:19:54.220 rmmod nvme_fabrics 00:19:54.220 rmmod nvme_keyring 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:54.220 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74944 ']' 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74944 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74944 ']' 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74944 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74944 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:54.221 killing process with pid 74944 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74944' 00:19:54.221 
09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74944 00:19:54.221 09:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74944 00:19:54.479 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.479 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:54.479 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:54.479 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:54.480 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:54.739 00:19:54.739 real 0m32.444s 00:19:54.739 user 2m1.897s 00:19:54.739 sys 0m6.965s 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.739 09:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 ************************************ 00:19:54.739 
END TEST nvmf_failover 00:19:54.739 ************************************ 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.998 ************************************ 00:19:54.998 START TEST nvmf_host_discovery 00:19:54.998 ************************************ 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:54.998 * Looking for test storage... 00:19:54.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:19:54.998 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.258 --rc genhtml_branch_coverage=1 00:19:55.258 --rc genhtml_function_coverage=1 00:19:55.258 --rc genhtml_legend=1 00:19:55.258 --rc geninfo_all_blocks=1 00:19:55.258 --rc geninfo_unexecuted_blocks=1 00:19:55.258 00:19:55.258 ' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.258 --rc genhtml_branch_coverage=1 00:19:55.258 --rc genhtml_function_coverage=1 00:19:55.258 --rc genhtml_legend=1 00:19:55.258 --rc geninfo_all_blocks=1 00:19:55.258 --rc geninfo_unexecuted_blocks=1 00:19:55.258 00:19:55.258 ' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.258 --rc genhtml_branch_coverage=1 00:19:55.258 --rc genhtml_function_coverage=1 00:19:55.258 --rc genhtml_legend=1 00:19:55.258 --rc geninfo_all_blocks=1 00:19:55.258 --rc geninfo_unexecuted_blocks=1 00:19:55.258 00:19:55.258 ' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:55.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.258 --rc genhtml_branch_coverage=1 00:19:55.258 --rc genhtml_function_coverage=1 00:19:55.258 --rc genhtml_legend=1 00:19:55.258 --rc geninfo_all_blocks=1 00:19:55.258 --rc geninfo_unexecuted_blocks=1 00:19:55.258 00:19:55.258 ' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.258 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:55.259 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:55.259 Cannot find device "nvmf_init_br" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:55.259 Cannot find device "nvmf_init_br2" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:55.259 Cannot find device "nvmf_tgt_br" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.259 Cannot find device "nvmf_tgt_br2" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:55.259 Cannot find device "nvmf_init_br" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:55.259 Cannot find device "nvmf_init_br2" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:55.259 Cannot find device "nvmf_tgt_br" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:55.259 Cannot find device "nvmf_tgt_br2" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:55.259 Cannot find device "nvmf_br" 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:55.259 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:55.519 Cannot find device "nvmf_init_if" 00:19:55.519 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:55.519 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:55.519 Cannot find device "nvmf_init_if2" 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:55.519 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:55.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:19:55.520 00:19:55.520 --- 10.0.0.3 ping statistics --- 00:19:55.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.520 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:55.520 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:55.520 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:55.520 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:19:55.520 00:19:55.520 --- 10.0.0.4 ping statistics --- 00:19:55.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.520 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:55.779 00:19:55.779 --- 10.0.0.1 ping statistics --- 00:19:55.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.779 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:55.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:55.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:19:55.779 00:19:55.779 --- 10.0.0.2 ping statistics --- 00:19:55.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.779 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75602 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75602 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75602 ']' 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.779 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.779 [2024-12-09 09:30:33.343303] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
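
The trace above first tears down any leftover interfaces from a previous run (the "Cannot find device" / "Cannot open network namespace" messages are expected on a clean machine) and then builds the test topology: four veth pairs, a network namespace for the target, the nvmf_br bridge joining the peer ends, iptables ACCEPT rules for the NVMe/TCP port, and ping checks in both directions. The same setup, condensed into a standalone sketch (assumes root privileges and no pre-existing interfaces or namespaces with these names; the bring-up commands are grouped into loops and the SPDK_NVMF iptables bookkeeping comments are dropped):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring up the host-side ends and the in-namespace ends (plus loopback in the namespace).
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the four *_br peers together so host and namespace share one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Allow NVMe/TCP (4420) in and let traffic hairpin across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check, host <-> namespace, matching the pings above.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
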
00:19:55.779 [2024-12-09 09:30:33.343383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.779 [2024-12-09 09:30:33.495887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.038 [2024-12-09 09:30:33.546112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.038 [2024-12-09 09:30:33.546173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.038 [2024-12-09 09:30:33.546182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.038 [2024-12-09 09:30:33.546190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.038 [2024-12-09 09:30:33.546198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.038 [2024-12-09 09:30:33.546508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.038 [2024-12-09 09:30:33.587790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.606 [2024-12-09 09:30:34.280327] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.606 [2024-12-09 09:30:34.292435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.606 09:30:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.606 null0 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.606 null1 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.606 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75634 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75634 /tmp/host.sock 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75634 ']' 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:56.865 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.865 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.865 [2024-12-09 09:30:34.388372] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
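
By this point the target application has been started inside the namespace and provisioned over its default RPC socket, two null bdevs have been created to back the namespaces used later, and a second nvmf_tgt instance has been launched against /tmp/host.sock to play the NVMe-oF host. A sketch of the equivalent manual sequence, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper and an SPDK build tree assumed as the working directory (the sleeps are a crude stand-in for the suite's waitforlisten):

    # Target application, pinned to core 1 (-m 0x2), inside the test namespace.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    sleep 2   # wait for /var/tmp/spdk.sock to come up (waitforlisten in the suite)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512   # name, size, block size as traced above
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine

    # Host application on its own RPC socket; it will run the discovery client.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    sleep 2   # wait for /tmp/host.sock
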
00:19:56.865 [2024-12-09 09:30:34.388857] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75634 ] 00:19:56.865 [2024-12-09 09:30:34.542695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.123 [2024-12-09 09:30:34.595524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.123 [2024-12-09 09:30:34.636874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:57.690 09:30:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:57.690 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.948 [2024-12-09 09:30:35.598598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.948 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:19:58.207 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:58.775 [2024-12-09 09:30:36.280301] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:58.775 [2024-12-09 09:30:36.280333] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:58.775 
[2024-12-09 09:30:36.280354] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:58.775 [2024-12-09 09:30:36.286330] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:58.775 [2024-12-09 09:30:36.340631] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:58.775 [2024-12-09 09:30:36.341819] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x250dda0:1 started. 00:19:58.775 [2024-12-09 09:30:36.343839] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:58.775 [2024-12-09 09:30:36.344007] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:58.775 [2024-12-09 09:30:36.348930] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x250dda0 was disconnected and freed. delete nvme_qpair. 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.383 [2024-12-09 09:30:36.941119] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x251c190:1 started. 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:59.383 [2024-12-09 09:30:36.948861] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x251c190 was disconnected and freed. delete nvme_qpair. 
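
The sequence traced since then is the core of the discovery test: the host starts the discovery client against 10.0.0.3:8009, the target provisions nqn.2016-06.io.spdk:cnode0 step by step, and the host is polled until the controller (nvme0) and its namespace bdevs (nvme0n1, then nvme0n2) appear, with notify_get_notifications used to count the new-bdev notifications. Condensed into a sketch (scripts/rpc.py again standing in for rpc_cmd; HOST_SOCK is just shorthand for the host RPC socket used above):

    HOST_SOCK=/tmp/host.sock

    # Host side: verbose bdev_nvme logging, then start the discovery client.
    ./scripts/rpc.py -s "$HOST_SOCK" log_set_flag bdev_nvme
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target side: create the subsystem, expose null0 on port 4420, allow the host NQN,
    # then add a second namespace. Listener changes reach the host through the discovery
    # AER; namespace changes show up as new bdevs on the already-attached controller.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

    # Host-side checks, reconstructed from the trace (get_subsystem_names / get_bdev_list /
    # get_notification_count in host/discovery.sh):
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # -> nvme0
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # -> nvme0n1 nvme0n2
    ./scripts/rpc.py -s "$HOST_SOCK" notify_get_notifications -i 0 | jq '. | length'
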
00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:59.383 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:59.384 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.384 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.384 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.384 [2024-12-09 09:30:37.023734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:59.384 [2024-12-09 09:30:37.024278] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:59.384 [2024-12-09 09:30:37.024306] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:59.384 [2024-12-09 09:30:37.030252] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:59.384 [2024-12-09 09:30:37.092985] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:59.384 [2024-12-09 09:30:37.093040] 
bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:59.384 [2024-12-09 09:30:37.093051] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:59.384 [2024-12-09 09:30:37.093057] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.384 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.642 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 [2024-12-09 09:30:37.168150] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:59.643 [2024-12-09 09:30:37.168182] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:59.643 [2024-12-09 09:30:37.169397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.643 [2024-12-09 09:30:37.169435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.643 [2024-12-09 09:30:37.169448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.643 [2024-12-09 09:30:37.169458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.643 [2024-12-09 09:30:37.169651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.643 [2024-12-09 09:30:37.169781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.643 [2024-12-09 09:30:37.169837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.643 [2024-12-09 09:30:37.169938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.643 [2024-12-09 09:30:37.169988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x24e9fb0 is same with the state(6) to be set 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:59.643 [2024-12-09 09:30:37.174135] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:59.643 [2024-12-09 09:30:37.174274] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:59.643 [2024-12-09 09:30:37.174337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e9fb0 (9): Bad file descriptor 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:59.643 09:30:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.643 09:30:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:59.643 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.902 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.834 [2024-12-09 09:30:38.437347] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:00.834 [2024-12-09 09:30:38.437564] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:00.834 [2024-12-09 09:30:38.437625] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:00.834 [2024-12-09 09:30:38.443379] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:00.834 [2024-12-09 09:30:38.501837] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:00.834 [2024-12-09 09:30:38.502825] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2519210:1 started. 00:20:00.834 [2024-12-09 09:30:38.505046] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:00.834 [2024-12-09 09:30:38.505236] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:00.834 [2024-12-09 09:30:38.506541] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2519210 was disconnected and freed. delete nvme_qpair. 
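The restart of discovery above is driven through the host RPC socket; a condensed sketch of that flow, reusing only the rpc.py flags visible in this trace (the /tmp/host.sock path and nqn.2021-12.io.spdk:test host NQN are specific to this test run, and the polling loop is a simplified stand-in for the waitforcondition helper), is:

  # start discovery against the target's discovery port and wait for the attach to finish (-w)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # poll (up to max=10 attempts, as in the trace) until the expected controller appears
  for _ in $(seq 1 10); do
      names=$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
      [[ "$names" == "nvme0" ]] && break
      sleep 1
  done

Calling bdev_nvme_start_discovery a second time with the same -b name is expected to fail, which is what the JSON-RPC error -17 ("File exists") responses below verify; the 8010 attempt with -T 3000 likewise ends in error -110 ("Connection timed out") because nothing listens on that port.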
00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.834 request: 00:20:00.834 { 00:20:00.834 "name": "nvme", 00:20:00.834 "trtype": "tcp", 00:20:00.834 "traddr": "10.0.0.3", 00:20:00.834 "adrfam": "ipv4", 00:20:00.834 "trsvcid": "8009", 00:20:00.834 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:00.834 "wait_for_attach": true, 00:20:00.834 "method": "bdev_nvme_start_discovery", 00:20:00.834 "req_id": 1 00:20:00.834 } 00:20:00.834 Got JSON-RPC error response 00:20:00.834 response: 00:20:00.834 { 00:20:00.834 "code": -17, 00:20:00.834 "message": "File exists" 00:20:00.834 } 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.834 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.092 request: 00:20:01.092 { 00:20:01.092 "name": "nvme_second", 00:20:01.092 "trtype": "tcp", 00:20:01.092 "traddr": "10.0.0.3", 00:20:01.092 "adrfam": "ipv4", 00:20:01.092 "trsvcid": "8009", 00:20:01.092 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:01.092 "wait_for_attach": true, 00:20:01.092 "method": "bdev_nvme_start_discovery", 00:20:01.092 "req_id": 1 00:20:01.092 } 00:20:01.092 Got JSON-RPC error response 00:20:01.092 response: 00:20:01.092 { 00:20:01.092 "code": -17, 00:20:01.092 "message": "File exists" 00:20:01.092 } 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.092 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.093 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:02.025 [2024-12-09 09:30:39.683715] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:20:02.025 [2024-12-09 09:30:39.683778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e8fb0 with addr=10.0.0.3, port=8010 00:20:02.025 [2024-12-09 09:30:39.683803] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:02.025 [2024-12-09 09:30:39.683813] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:02.025 [2024-12-09 09:30:39.683823] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:03.395 [2024-12-09 09:30:40.682114] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.395 [2024-12-09 09:30:40.682170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e8fb0 with addr=10.0.0.3, port=8010 00:20:03.395 [2024-12-09 09:30:40.682192] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:03.395 [2024-12-09 09:30:40.682203] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:03.395 [2024-12-09 09:30:40.682213] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:03.962 [2024-12-09 09:30:41.680361] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:04.234 request: 00:20:04.234 { 00:20:04.234 "name": "nvme_second", 00:20:04.234 "trtype": "tcp", 00:20:04.234 "traddr": "10.0.0.3", 00:20:04.234 "adrfam": "ipv4", 00:20:04.234 "trsvcid": "8010", 00:20:04.234 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:04.234 "wait_for_attach": false, 00:20:04.234 "attach_timeout_ms": 3000, 00:20:04.234 "method": "bdev_nvme_start_discovery", 00:20:04.234 "req_id": 1 00:20:04.234 } 00:20:04.234 Got JSON-RPC error response 00:20:04.234 response: 00:20:04.234 { 00:20:04.234 "code": -110, 00:20:04.234 "message": "Connection timed out" 00:20:04.234 } 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:04.234 09:30:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75634 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.234 rmmod nvme_tcp 00:20:04.234 rmmod nvme_fabrics 00:20:04.234 rmmod nvme_keyring 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75602 ']' 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75602 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75602 ']' 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75602 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75602 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.234 killing process with pid 75602 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75602' 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75602 00:20:04.234 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75602 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:04.492 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:04.807 00:20:04.807 real 0m9.814s 00:20:04.807 user 0m17.085s 00:20:04.807 sys 0m2.378s 00:20:04.807 ************************************ 00:20:04.807 END TEST nvmf_host_discovery 00:20:04.807 ************************************ 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.807 ************************************ 00:20:04.807 START TEST nvmf_host_multipath_status 00:20:04.807 ************************************ 00:20:04.807 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:04.807 * Looking for test storage... 00:20:05.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.065 --rc genhtml_branch_coverage=1 00:20:05.065 --rc genhtml_function_coverage=1 00:20:05.065 --rc genhtml_legend=1 00:20:05.065 --rc geninfo_all_blocks=1 00:20:05.065 --rc geninfo_unexecuted_blocks=1 00:20:05.065 00:20:05.065 ' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.065 --rc genhtml_branch_coverage=1 00:20:05.065 --rc genhtml_function_coverage=1 00:20:05.065 --rc genhtml_legend=1 00:20:05.065 --rc geninfo_all_blocks=1 00:20:05.065 --rc geninfo_unexecuted_blocks=1 00:20:05.065 00:20:05.065 ' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.065 --rc genhtml_branch_coverage=1 00:20:05.065 --rc genhtml_function_coverage=1 00:20:05.065 --rc genhtml_legend=1 00:20:05.065 --rc geninfo_all_blocks=1 00:20:05.065 --rc geninfo_unexecuted_blocks=1 00:20:05.065 00:20:05.065 ' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:05.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.065 --rc genhtml_branch_coverage=1 00:20:05.065 --rc genhtml_function_coverage=1 00:20:05.065 --rc genhtml_legend=1 00:20:05.065 --rc geninfo_all_blocks=1 00:20:05.065 --rc geninfo_unexecuted_blocks=1 00:20:05.065 00:20:05.065 ' 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:05.065 09:30:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.065 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:05.066 Cannot find device "nvmf_init_br" 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:05.066 Cannot find device "nvmf_init_br2" 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:05.066 Cannot find device "nvmf_tgt_br" 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.066 Cannot find device "nvmf_tgt_br2" 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:05.066 Cannot find device "nvmf_init_br" 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:05.066 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:05.324 Cannot find device "nvmf_init_br2" 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:05.324 Cannot find device "nvmf_tgt_br" 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:05.324 Cannot find device "nvmf_tgt_br2" 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:05.324 Cannot find device "nvmf_br" 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:05.324 Cannot find device "nvmf_init_if" 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:05.324 Cannot find device "nvmf_init_if2" 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:05.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:05.324 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:05.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:05.325 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:05.325 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:05.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:05.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:05.583 00:20:05.583 --- 10.0.0.3 ping statistics --- 00:20:05.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.583 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:05.583 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:05.583 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:20:05.583 00:20:05.583 --- 10.0.0.4 ping statistics --- 00:20:05.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.583 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:05.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:05.583 00:20:05.583 --- 10.0.0.1 ping statistics --- 00:20:05.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.583 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:05.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:20:05.583 00:20:05.583 --- 10.0.0.2 ping statistics --- 00:20:05.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.583 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76140 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76140 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76140 ']' 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
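The entries above construct the virtual topology this test runs on: two veth pairs, with nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) left in the default namespace and nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) moved into nvmf_tgt_ns_spdk, all peer ends enslaved to the nvmf_br bridge, port 4420 opened in iptables, and connectivity verified with ping before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of one half of that topology, reconstructed from the commands logged above (the second *_if2/*_br2 pair is set up the same way; this script is an illustration, not the test's nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                          # bridge the two peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                               # host reaches the namespaced target
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &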
00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.583 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:05.583 [2024-12-09 09:30:43.265277] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:20:05.583 [2024-12-09 09:30:43.265348] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.840 [2024-12-09 09:30:43.420036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:05.840 [2024-12-09 09:30:43.469760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.840 [2024-12-09 09:30:43.469946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.840 [2024-12-09 09:30:43.470112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.840 [2024-12-09 09:30:43.470163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.840 [2024-12-09 09:30:43.470191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.840 [2024-12-09 09:30:43.471097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.840 [2024-12-09 09:30:43.471098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.840 [2024-12-09 09:30:43.514856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76140 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:06.773 [2024-12-09 09:30:44.417583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.773 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:07.032 Malloc0 00:20:07.032 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:07.291 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.548 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:07.805 [2024-12-09 09:30:45.326892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.805 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:08.064 [2024-12-09 09:30:45.558696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76190 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76190 /var/tmp/bdevperf.sock 00:20:08.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76190 ']' 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
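With networking in place, the target is provisioned entirely through rpc.py: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r), the namespace, and listeners on 10.0.0.3 ports 4420 and 4421; a bdevperf process is then started on the initiator side with its own RPC socket. A condensed sketch of that sequence, using the same commands shown in the log (the wrapper script itself is hypothetical):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # initiator side: bdevperf waits for bdev_nvme_attach_controller calls on its own socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

The two bdev_nvme_attach_controller calls that follow (ports 4420 and 4421, both with -x multipath) are what give bdevperf the two I/O paths whose ANA state the rest of the test manipulates.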
00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.064 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:08.998 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.998 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:08.998 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:09.256 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:09.514 Nvme0n1 00:20:09.514 09:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:09.797 Nvme0n1 00:20:09.797 09:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:09.797 09:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:12.331 09:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:12.331 09:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:12.331 09:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:12.331 09:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:13.262 09:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:13.262 09:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:13.262 09:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.262 09:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:13.519 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.519 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:13.519 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:13.519 09:30:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.778 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.778 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:13.778 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.778 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.036 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:14.294 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.294 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:14.294 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.294 09:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:14.553 09:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.553 09:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:14.553 09:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:14.811 09:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
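Each check_status round above is six assertions driven by the same primitive: ask bdevperf's RPC socket for bdev_nvme_get_io_paths and extract one boolean (current, connected, or accessible) per listener port with jq. A hedged sketch of such a helper, mirroring the jq filters visible in the log (this standalone function is an assumption, not the test's actual multipath_status.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  port_status() {   # usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
      local port=$1 field=$2 expected=$3 actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

check_status then strings six such checks together, in the order 4420/4421 current, 4420/4421 connected, 4420/4421 accessible, which is how a call like check_status true false true true true true above maps onto the individual port_status lines.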
00:20:15.070 09:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:16.008 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:16.008 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:16.008 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.008 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:16.269 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.269 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:16.269 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.269 09:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:16.527 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.527 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:16.527 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.527 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:16.786 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.786 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:16.786 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.786 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:17.045 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.045 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:17.045 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.045 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:17.046 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.046 09:30:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:17.046 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:17.046 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.304 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.304 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:17.304 09:30:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:17.567 09:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:17.829 09:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:18.762 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:18.762 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:18.762 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.762 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:19.020 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.020 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:19.020 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.020 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.278 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:19.278 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.278 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.278 09:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.536 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.536 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.536 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.536 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.794 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.794 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.794 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.794 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:20.052 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:20.311 09:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:20.585 09:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:21.520 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:21.521 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:21.521 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.521 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:21.778 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.778 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:20:21.778 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:21.778 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.036 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:22.036 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:22.036 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.036 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.320 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.320 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:22.320 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.320 09:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:22.578 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.578 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:22.578 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.578 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:22.837 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.837 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:22.837 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:22.837 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.097 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:23.097 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:23.097 09:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:23.356 09:31:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:23.356 09:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.734 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:24.993 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:24.993 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:24.993 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.993 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:25.253 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.253 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:25.253 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:25.253 09:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.512 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.512 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:25.512 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:25.512 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.771 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:25.771 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:25.771 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.771 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:26.028 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.028 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:26.028 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:26.286 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:26.286 09:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:27.661 09:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:27.661 09:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:27.661 09:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.662 09:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:27.662 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:27.662 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:27.662 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.662 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:27.940 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.941 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:28.249 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.249 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:28.249 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.249 09:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:28.507 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:28.507 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:28.507 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.507 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:28.766 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.766 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:28.766 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:28.766 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:29.024 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:29.282 09:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:30.219 09:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:30.219 09:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:30.219 09:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.219 09:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:30.477 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.477 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:30.477 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:30.477 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.735 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.735 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:30.735 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:30.735 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.993 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.993 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:30.993 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:30.993 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.251 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.251 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:31.251 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.251 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:31.510 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.510 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:31.510 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:31.510 09:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:31.510 09:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.510 09:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:31.510 09:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:31.768 09:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:32.027 09:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:32.962 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:32.962 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:32.962 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.962 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:33.220 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:33.220 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:33.220 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.220 09:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:33.478 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.478 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:33.478 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.478 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:33.736 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.736 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:33.736 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.736 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:33.994 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
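After bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active earlier in this stretch, the test keeps cycling the listeners' ANA states and re-checking the paths; the difference from the earlier active_passive rounds is that with both listeners optimized, both paths now report current (the check_status true true true true true true round above). A sketch of the set-and-wait half of one iteration, using the RPCs exactly as logged (the set_ANA_state wrapper here is an assumption made for illustration):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  set_ANA_state() {   # usage: set_ANA_state <state for 4420> <state for 4421>
      $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized
  sleep 1   # give the host time to pick up the ANA change before asserting path state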
00:20:33.994 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:33.994 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.994 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:34.253 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.253 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:34.253 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.253 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:34.512 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.512 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:34.512 09:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:34.512 09:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:34.769 09:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:35.702 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:35.702 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:35.702 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:35.702 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.267 09:31:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.267 09:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:36.523 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.524 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:36.524 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.524 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:36.781 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.781 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:36.781 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.781 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:37.039 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.039 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:37.039 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.039 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:37.298 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:37.298 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:37.298 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:37.298 09:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:37.611 09:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:38.546 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:38.546 09:31:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:38.546 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.546 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:38.804 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:38.804 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:38.804 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.804 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:39.061 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:39.061 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:39.061 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.061 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:39.319 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.319 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:39.319 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.319 09:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:39.578 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.578 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:39.578 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.578 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.837 
09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76190 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76190 ']' 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76190 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.837 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76190 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:40.098 killing process with pid 76190 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76190' 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76190 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76190 00:20:40.098 { 00:20:40.098 "results": [ 00:20:40.098 { 00:20:40.098 "job": "Nvme0n1", 00:20:40.098 "core_mask": "0x4", 00:20:40.098 "workload": "verify", 00:20:40.098 "status": "terminated", 00:20:40.098 "verify_range": { 00:20:40.098 "start": 0, 00:20:40.098 "length": 16384 00:20:40.098 }, 00:20:40.098 "queue_depth": 128, 00:20:40.098 "io_size": 4096, 00:20:40.098 "runtime": 30.110107, 00:20:40.098 "iops": 10377.080360425156, 00:20:40.098 "mibps": 40.535470157910765, 00:20:40.098 "io_failed": 0, 00:20:40.098 "io_timeout": 0, 00:20:40.098 "avg_latency_us": 12308.015007760474, 00:20:40.098 "min_latency_us": 565.873092369478, 00:20:40.098 "max_latency_us": 4015751.2995983935 00:20:40.098 } 00:20:40.098 ], 00:20:40.098 "core_count": 1 00:20:40.098 } 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76190 00:20:40.098 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:40.098 [2024-12-09 09:30:45.630000] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
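The terminated-job summary that bdevperf prints above is internally consistent: with an io_size of 4096 bytes, MiB/s equals iops x io_size / 2^20, i.e. 10377.08 / 256 ~ 40.54, which matches the reported mibps. A quick cross-check with jq, assuming the JSON block has been saved to a file (bdevperf_result.json is a hypothetical name, not something the test produces):

  # Recompute throughput from iops and io_size and print it next to the reported value.
  jq -r '.results[0] | "\(.iops * .io_size / 1048576) MiB/s (reported mibps: \(.mibps))"' \
      bdevperf_result.json

The "terminated" status and ~30 s runtime reflect the test tearing bdevperf down with killprocess once the multipath checks pass, rather than letting the full run complete.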
00:20:40.098 [2024-12-09 09:30:45.630094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76190 ] 00:20:40.098 [2024-12-09 09:30:45.783456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.098 [2024-12-09 09:30:45.836196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.098 [2024-12-09 09:30:45.879356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.098 Running I/O for 90 seconds... 00:20:40.098 8740.00 IOPS, 34.14 MiB/s [2024-12-09T09:31:17.821Z] 9042.00 IOPS, 35.32 MiB/s [2024-12-09T09:31:17.821Z] 9139.67 IOPS, 35.70 MiB/s [2024-12-09T09:31:17.821Z] 9673.00 IOPS, 37.79 MiB/s [2024-12-09T09:31:17.821Z] 10076.20 IOPS, 39.36 MiB/s [2024-12-09T09:31:17.821Z] 10212.83 IOPS, 39.89 MiB/s [2024-12-09T09:31:17.821Z] 10323.00 IOPS, 40.32 MiB/s [2024-12-09T09:31:17.821Z] 10389.25 IOPS, 40.58 MiB/s [2024-12-09T09:31:17.821Z] 10598.44 IOPS, 41.40 MiB/s [2024-12-09T09:31:17.821Z] 10759.40 IOPS, 42.03 MiB/s [2024-12-09T09:31:17.821Z] 10838.73 IOPS, 42.34 MiB/s [2024-12-09T09:31:17.821Z] 10871.50 IOPS, 42.47 MiB/s [2024-12-09T09:31:17.821Z] 10898.00 IOPS, 42.57 MiB/s [2024-12-09T09:31:17.821Z] [2024-12-09 09:31:00.822865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.822931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.822980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.822997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.823032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.823066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.823100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.823135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:20:40.098 [2024-12-09 09:31:00.823155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.823168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.098 [2024-12-09 09:31:00.823202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.098 [2024-12-09 09:31:00.823236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.098 [2024-12-09 09:31:00.823295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.098 [2024-12-09 09:31:00.823339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:40.098 [2024-12-09 09:31:00.823359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.098 [2024-12-09 09:31:00.823372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.823810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.823846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.823877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.823909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.823942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.823974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.823992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.824389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 
09:31:00.824948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.099 [2024-12-09 09:31:00.824973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.824995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.099 [2024-12-09 09:31:00.825258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:40.099 [2024-12-09 09:31:00.825286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 
cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.825566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.825967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.825988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826382] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.100 [2024-12-09 09:31:00.826437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 
09:31:00.826750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.826982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.826996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.827016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.827031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.827052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.827067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.827087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.827101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.827121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.100 [2024-12-09 09:31:00.827135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:40.100 [2024-12-09 09:31:00.827166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:00.827180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.827200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:00.827213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.827233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:00.827247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.827267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:00.827305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.827867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:00.827891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.827921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.827936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.827962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.827977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.828003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.828018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.828044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.828058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.828085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.828100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.828126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.828140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.828177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.828192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:00.828238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:00.828266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:40.101 10369.86 IOPS, 40.51 MiB/s [2024-12-09T09:31:17.824Z] 9678.53 IOPS, 37.81 MiB/s [2024-12-09T09:31:17.824Z] 9073.62 IOPS, 35.44 MiB/s [2024-12-09T09:31:17.824Z] 8539.88 IOPS, 33.36 MiB/s [2024-12-09T09:31:17.824Z] 8513.39 IOPS, 33.26 MiB/s [2024-12-09T09:31:17.824Z] 8686.37 IOPS, 33.93 MiB/s [2024-12-09T09:31:17.824Z] 9003.75 IOPS, 35.17 MiB/s [2024-12-09T09:31:17.824Z] 9333.86 IOPS, 36.46 MiB/s [2024-12-09T09:31:17.824Z] 9625.86 IOPS, 37.60 MiB/s [2024-12-09T09:31:17.824Z] 9677.96 IOPS, 37.80 MiB/s [2024-12-09T09:31:17.824Z] 9723.71 IOPS, 37.98 MiB/s [2024-12-09T09:31:17.824Z] 9782.36 IOPS, 38.21 MiB/s [2024-12-09T09:31:17.824Z] 9957.54 IOPS, 38.90 MiB/s [2024-12-09T09:31:17.824Z] 10123.26 IOPS, 39.54 MiB/s [2024-12-09T09:31:17.824Z] [2024-12-09 09:31:15.153683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.153764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.153808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.153847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.153866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.153879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.153897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44976 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.153910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.153928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.153959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.153972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.153990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154573] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.101 [2024-12-09 09:31:15.154841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:40.101 [2024-12-09 09:31:15.154861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.101 [2024-12-09 09:31:15.154875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.154894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.154907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.154925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.154956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.154969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.154987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.155000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.155018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.155031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.155054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.155067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.155086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.155099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.155959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.155988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.156118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.156150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.156180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.102 [2024-12-09 09:31:15.156211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:40.102 [2024-12-09 09:31:15.156345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.102 [2024-12-09 09:31:15.156359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:40.102 10292.25 IOPS, 40.20 MiB/s [2024-12-09T09:31:17.825Z] 10348.38 IOPS, 40.42 MiB/s [2024-12-09T09:31:17.825Z] 10376.23 IOPS, 40.53 MiB/s [2024-12-09T09:31:17.825Z] Received shutdown signal, test time was about 30.110831 seconds 
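The wall of notices above is spdk_nvme_print_completion reporting each in-flight command that completed with status 03/02, that is SCT 0x3 (path related) with SC 0x02, Asymmetric Access Inaccessible, which is expected while the multipath_status test toggles the ANA state of one path. If the bdevperf output was captured to a file, a rough way to digest the flood is a pair of greps; using try.txt (the file this test writes and later removes) as the input is an assumption, since the notices may only appear on the console:

  # count how many completions carried the ANA-inaccessible status
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt | wc -l
  # break the affected commands down by opcode (READ vs WRITE)
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' try.txt | awk '{print $2}' | sort | uniq -c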
00:20:40.102 00:20:40.102 Latency(us) 00:20:40.102 [2024-12-09T09:31:17.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.102 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.102 Verification LBA range: start 0x0 length 0x4000 00:20:40.102 Nvme0n1 : 30.11 10377.08 40.54 0.00 0.00 12308.02 565.87 4015751.30 00:20:40.102 [2024-12-09T09:31:17.825Z] =================================================================================================================== 00:20:40.102 [2024-12-09T09:31:17.825Z] Total : 10377.08 40.54 0.00 0.00 12308.02 565.87 4015751.30 00:20:40.102 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.359 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:40.359 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:40.359 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:40.359 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.359 09:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:40.359 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.359 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:40.359 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.359 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.359 rmmod nvme_tcp 00:20:40.359 rmmod nvme_fabrics 00:20:40.359 rmmod nvme_keyring 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76140 ']' 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76140 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76140 ']' 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76140 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76140 00:20:40.617 killing process with pid 76140 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76140' 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76140 00:20:40.617 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76140 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.874 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
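The trace above is the multipath_status teardown: delete the subsystem over RPC, kill the target app (pid 76140 in this run), unload the host NVMe-oF modules, strip the SPDK-tagged iptables rules, and dismantle the veth/bridge topology and the target namespace. Condensed into plain shell, this is a sketch of what the traced nvmftestfini and nvmf_veth_fini helpers do, using the names from this log; the final netns delete is an inference about _remove_spdk_ns, which is traced but not expanded here:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged with the SPDK_NVMF comment
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                        # assumed equivalent of _remove_spdk_ns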
00:20:41.133 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:41.133 00:20:41.133 real 0m36.238s 00:20:41.133 user 1m51.108s 00:20:41.133 sys 0m13.677s 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:41.133 ************************************ 00:20:41.133 END TEST nvmf_host_multipath_status 00:20:41.133 ************************************ 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.133 ************************************ 00:20:41.133 START TEST nvmf_discovery_remove_ifc 00:20:41.133 ************************************ 00:20:41.133 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:41.133 * Looking for test storage... 00:20:41.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.393 
09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:41.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.393 --rc genhtml_branch_coverage=1 00:20:41.393 --rc genhtml_function_coverage=1 00:20:41.393 --rc genhtml_legend=1 00:20:41.393 --rc geninfo_all_blocks=1 00:20:41.393 --rc geninfo_unexecuted_blocks=1 00:20:41.393 00:20:41.393 ' 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:41.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.393 --rc genhtml_branch_coverage=1 00:20:41.393 --rc genhtml_function_coverage=1 00:20:41.393 --rc genhtml_legend=1 00:20:41.393 --rc geninfo_all_blocks=1 00:20:41.393 --rc geninfo_unexecuted_blocks=1 00:20:41.393 00:20:41.393 ' 00:20:41.393 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:41.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.394 --rc genhtml_branch_coverage=1 00:20:41.394 --rc genhtml_function_coverage=1 00:20:41.394 --rc genhtml_legend=1 00:20:41.394 --rc geninfo_all_blocks=1 00:20:41.394 --rc geninfo_unexecuted_blocks=1 00:20:41.394 00:20:41.394 ' 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.394 --rc genhtml_branch_coverage=1 00:20:41.394 --rc genhtml_function_coverage=1 00:20:41.394 --rc genhtml_legend=1 00:20:41.394 --rc geninfo_all_blocks=1 00:20:41.394 --rc geninfo_unexecuted_blocks=1 00:20:41.394 00:20:41.394 ' 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.394 09:31:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:41.394 09:31:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:20:41.394 09:31:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:41.394 Cannot find device "nvmf_init_br" 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:41.394 Cannot find device "nvmf_init_br2" 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:41.394 Cannot find device "nvmf_tgt_br" 00:20:41.394 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:41.395 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.395 Cannot find device "nvmf_tgt_br2" 00:20:41.395 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:41.395 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:41.395 Cannot find device "nvmf_init_br" 00:20:41.395 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:41.395 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:41.654 Cannot find device "nvmf_init_br2" 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:41.654 Cannot find device "nvmf_tgt_br" 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:41.654 Cannot find device "nvmf_tgt_br2" 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:41.654 Cannot find device "nvmf_br" 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:41.654 Cannot 
find device "nvmf_init_if" 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:41.654 Cannot find device "nvmf_init_if2" 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:20:41.654 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:41.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:20:41.914 00:20:41.914 --- 10.0.0.3 ping statistics --- 00:20:41.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.914 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:41.914 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:41.914 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:20:41.914 00:20:41.914 --- 10.0.0.4 ping statistics --- 00:20:41.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.914 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:41.914 00:20:41.914 --- 10.0.0.1 ping statistics --- 00:20:41.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.914 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:41.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:20:41.914 00:20:41.914 --- 10.0.0.2 ping statistics --- 00:20:41.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.914 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.914 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77003 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77003 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77003 ']' 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
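Before the discovery test proper, nvmftestinit / nvmf_veth_init rebuilt the usual two-initiator, two-target veth topology (10.0.0.1 and 10.0.0.2 on the host side, 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace), opened TCP port 4420 with SPDK_NVMF-tagged iptables rules, and verified reachability with the pings above; nvmfappstart is now launching nvmf_tgt inside that namespace. A condensed sketch of the traced setup, with device names, addresses and masks as in the trace and the separate up/master steps merged into loops:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4      # host to target-namespace reachability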
00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.915 09:31:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.915 [2024-12-09 09:31:19.618439] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:20:41.915 [2024-12-09 09:31:19.618533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.175 [2024-12-09 09:31:19.769489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.175 [2024-12-09 09:31:19.816523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.175 [2024-12-09 09:31:19.816575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.175 [2024-12-09 09:31:19.816585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.175 [2024-12-09 09:31:19.816593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.175 [2024-12-09 09:31:19.816601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.175 [2024-12-09 09:31:19.816903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.175 [2024-12-09 09:31:19.859462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.114 [2024-12-09 09:31:20.572323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.114 [2024-12-09 09:31:20.580452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:43.114 null0 00:20:43.114 [2024-12-09 09:31:20.612343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77035 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77035 /tmp/host.sock 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77035 ']' 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.114 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.114 09:31:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.114 [2024-12-09 09:31:20.689079] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:20:43.114 [2024-12-09 09:31:20.689165] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77035 ] 00:20:43.373 [2024-12-09 09:31:20.839468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.373 [2024-12-09 09:31:20.888751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.991 [2024-12-09 09:31:21.621020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.991 09:31:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:45.367 [2024-12-09 09:31:22.671655] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:45.367 [2024-12-09 09:31:22.671678] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:45.367 [2024-12-09 09:31:22.671696] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:45.367 [2024-12-09 09:31:22.677682] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:45.367 [2024-12-09 09:31:22.731914] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:45.367 [2024-12-09 09:31:22.732776] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1cd6f00:1 started. 00:20:45.367 [2024-12-09 09:31:22.734364] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:45.367 [2024-12-09 09:31:22.734411] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:45.367 [2024-12-09 09:31:22.734431] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:45.367 [2024-12-09 09:31:22.734446] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:45.367 [2024-12-09 09:31:22.734480] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:45.367 [2024-12-09 09:31:22.740221] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1cd6f00 was disconnected and freed. delete nvme_qpair. 
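At this point the host-side app (pid 77035, RPC socket /tmp/host.sock) has attached to the discovery service on 10.0.0.3:8009, connected to nqn.2016-06.io.spdk:cnode0 and exposed it as bdev nvme0n1; the next step in the trace is to yank the target interface out from under it with ip addr del and ip link set ... down. The RPC sequence just traced, condensed into a sketch (paths shortened to the repo root; the until-loop stands in for the test's wait_for_bdev/get_bdev_list helpers):

  # start the host-side app with RPC on /tmp/host.sock and bdev_nvme debug logging
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # poll until the attached namespace shows up as a bdev
  until scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | grep -q '"name": "nvme0n1"'; do sleep 1; done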
00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:45.367 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:45.368 09:31:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:46.304 09:31:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:46.304 09:31:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:47.253 09:31:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:48.628 09:31:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:48.628 09:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.628 09:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:48.628 09:31:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:49.563 09:31:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:49.563 09:31:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:50.535 09:31:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:50.535 [2024-12-09 09:31:28.153515] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:50.535 [2024-12-09 09:31:28.153590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.535 [2024-12-09 09:31:28.153605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.535 [2024-12-09 09:31:28.153621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.535 [2024-12-09 09:31:28.153631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.535 [2024-12-09 09:31:28.153642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.535 [2024-12-09 09:31:28.153652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.535 [2024-12-09 09:31:28.153662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.535 [2024-12-09 09:31:28.153671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.535 [2024-12-09 09:31:28.153681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.535 [2024-12-09 09:31:28.153690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.535 [2024-12-09 09:31:28.153701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb2fc0 is same with the state(6) to be set 00:20:50.535 [2024-12-09 09:31:28.163491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb2fc0 (9): Bad file descriptor 00:20:50.535 [2024-12-09 09:31:28.173499] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:50.535 [2024-12-09 09:31:28.173515] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:50.535 [2024-12-09 09:31:28.173521] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:50.535 [2024-12-09 09:31:28.173528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:50.535 [2024-12-09 09:31:28.173568] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:51.497 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:51.497 [2024-12-09 09:31:29.211696] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:51.497 [2024-12-09 09:31:29.211864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb2fc0 with addr=10.0.0.3, port=4420 00:20:51.497 [2024-12-09 09:31:29.211917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb2fc0 is same with the state(6) to be set 00:20:51.497 [2024-12-09 09:31:29.212023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb2fc0 (9): Bad file descriptor 00:20:51.497 [2024-12-09 09:31:29.213148] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:51.497 [2024-12-09 09:31:29.213257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:51.497 [2024-12-09 09:31:29.213290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:51.497 [2024-12-09 09:31:29.213323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:51.497 [2024-12-09 09:31:29.213350] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:51.497 [2024-12-09 09:31:29.213370] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:20:51.497 [2024-12-09 09:31:29.213388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:51.497 [2024-12-09 09:31:29.213421] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:51.497 [2024-12-09 09:31:29.213440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:51.755 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.755 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:51.755 09:31:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:52.691 [2024-12-09 09:31:30.211954] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:52.691 [2024-12-09 09:31:30.212012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:52.691 [2024-12-09 09:31:30.212044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:52.691 [2024-12-09 09:31:30.212055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:52.691 [2024-12-09 09:31:30.212066] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:52.691 [2024-12-09 09:31:30.212077] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:52.691 [2024-12-09 09:31:30.212085] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:52.691 [2024-12-09 09:31:30.212091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:52.691 [2024-12-09 09:31:30.212159] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:52.691 [2024-12-09 09:31:30.212226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.691 [2024-12-09 09:31:30.212242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.691 [2024-12-09 09:31:30.212259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.691 [2024-12-09 09:31:30.212268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.691 [2024-12-09 09:31:30.212280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.691 [2024-12-09 09:31:30.212290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.691 [2024-12-09 09:31:30.212300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.691 [2024-12-09 09:31:30.212309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.691 [2024-12-09 09:31:30.212320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.691 [2024-12-09 09:31:30.212329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.691 [2024-12-09 09:31:30.212339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
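The errors above are the fault this test injects on purpose: the target's data address is removed and its interface downed inside the nvmf_tgt_ns_spdk namespace, reconnect attempts to 10.0.0.3:4420 fail with errno 110 until the 2-second ctrlr-loss timeout expires, and both the nvme0n1 bdev and the discovery entry are expected to disappear. A sketch of the toggle, using the exact interface and address from this run (the restore step is the one executed just below):

    # drop the target path to force controller loss
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # restore it so the discovery service can attach a fresh controller
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up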
00:20:52.691 [2024-12-09 09:31:30.212386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3ea20 (9): Bad file descriptor 00:20:52.691 [2024-12-09 09:31:30.213371] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:52.691 [2024-12-09 09:31:30.213390] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:52.691 09:31:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:54.063 09:31:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:54.063 09:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:54.628 [2024-12-09 09:31:32.213285] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:54.628 [2024-12-09 09:31:32.213325] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:54.628 [2024-12-09 09:31:32.213343] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:54.628 [2024-12-09 09:31:32.219310] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:54.628 [2024-12-09 09:31:32.273611] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:54.628 [2024-12-09 09:31:32.274634] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1cdf1d0:1 started. 00:20:54.628 [2024-12-09 09:31:32.275992] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:54.628 [2024-12-09 09:31:32.276054] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:54.628 [2024-12-09 09:31:32.276078] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:54.628 [2024-12-09 09:31:32.276099] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:54.628 [2024-12-09 09:31:32.276111] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:54.628 [2024-12-09 09:31:32.281991] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1cdf1d0 was disconnected and freed. delete nvme_qpair. 
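The repeated bdev_get_bdevs | jq | sort | xargs pipelines throughout this trace are the test's polling helper: it re-reads the bdev list once per second and compares it against the expected value ('' while waiting for nvme0n1 to be torn down, nvme1n1 while waiting for the re-attached controller). A minimal sketch of that loop, reconstructed from the xtrace (the helper names match the test script; any retry limit the script applies is omitted here):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }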
00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77035 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77035 ']' 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77035 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77035 00:20:54.886 killing process with pid 77035 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77035' 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77035 00:20:54.886 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77035 00:20:55.144 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:55.144 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.144 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.401 rmmod nvme_tcp 00:20:55.401 rmmod nvme_fabrics 00:20:55.401 rmmod nvme_keyring 00:20:55.401 09:31:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77003 ']' 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77003 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77003 ']' 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77003 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77003 00:20:55.401 killing process with pid 77003 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77003' 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77003 00:20:55.401 09:31:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77003 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.658 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:55.916 00:20:55.916 real 0m14.713s 00:20:55.916 user 0m23.848s 00:20:55.916 sys 0m3.545s 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:55.916 ************************************ 00:20:55.916 END TEST nvmf_discovery_remove_ifc 00:20:55.916 ************************************ 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.916 ************************************ 00:20:55.916 START TEST nvmf_identify_kernel_target 00:20:55.916 ************************************ 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:55.916 * Looking for test storage... 
00:20:55.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:55.916 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.175 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:56.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.175 --rc genhtml_branch_coverage=1 00:20:56.175 --rc genhtml_function_coverage=1 00:20:56.175 --rc genhtml_legend=1 00:20:56.176 --rc geninfo_all_blocks=1 00:20:56.176 --rc geninfo_unexecuted_blocks=1 00:20:56.176 00:20:56.176 ' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:56.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.176 --rc genhtml_branch_coverage=1 00:20:56.176 --rc genhtml_function_coverage=1 00:20:56.176 --rc genhtml_legend=1 00:20:56.176 --rc geninfo_all_blocks=1 00:20:56.176 --rc geninfo_unexecuted_blocks=1 00:20:56.176 00:20:56.176 ' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:56.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.176 --rc genhtml_branch_coverage=1 00:20:56.176 --rc genhtml_function_coverage=1 00:20:56.176 --rc genhtml_legend=1 00:20:56.176 --rc geninfo_all_blocks=1 00:20:56.176 --rc geninfo_unexecuted_blocks=1 00:20:56.176 00:20:56.176 ' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:56.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.176 --rc genhtml_branch_coverage=1 00:20:56.176 --rc genhtml_function_coverage=1 00:20:56.176 --rc genhtml_legend=1 00:20:56.176 --rc geninfo_all_blocks=1 00:20:56.176 --rc geninfo_unexecuted_blocks=1 00:20:56.176 00:20:56.176 ' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.176 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:56.176 09:31:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:56.176 09:31:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:56.176 Cannot find device "nvmf_init_br" 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:56.176 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:56.176 Cannot find device "nvmf_init_br2" 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:56.177 Cannot find device "nvmf_tgt_br" 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.177 Cannot find device "nvmf_tgt_br2" 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:56.177 Cannot find device "nvmf_init_br" 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:56.177 Cannot find device "nvmf_init_br2" 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:56.177 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:56.435 Cannot find device "nvmf_tgt_br" 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:56.435 Cannot find device "nvmf_tgt_br2" 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:56.435 Cannot find device "nvmf_br" 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:56.435 Cannot find device "nvmf_init_if" 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:56.435 Cannot find device "nvmf_init_if2" 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.435 09:31:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:56.435 09:31:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:56.435 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:56.436 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:56.436 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:56.436 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:56.436 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:56.436 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:56.694 09:31:34 
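Before the kernel-target test proper can run, its prologue rebuilds the virtual topology that the pings below verify: an nvmf_tgt_ns_spdk namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, addresses 10.0.0.1-10.0.0.4, and iptables ACCEPT rules for port 4420. A condensed sketch of that setup using the names from this trace (only the first initiator/target pair is shown; nvmf_init_if2 and nvmf_tgt_if2 are configured the same way with 10.0.0.2 and 10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # from the host side to the target address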
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:56.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:56.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:20:56.694 00:20:56.694 --- 10.0.0.3 ping statistics --- 00:20:56.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.694 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:56.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:56.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:20:56.694 00:20:56.694 --- 10.0.0.4 ping statistics --- 00:20:56.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.694 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:56.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:56.694 00:20:56.694 --- 10.0.0.1 ping statistics --- 00:20:56.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.694 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:56.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:56.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:20:56.694 00:20:56.694 --- 10.0.0.2 ping statistics --- 00:20:56.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.694 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.694 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
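Two things happen just above: the firewall is opened for NVMe/TCP's default port 4420 through an ipts wrapper that tags every rule with an SPDK_NVMF comment, and a four-way ping sweep confirms that both namespaces can reach each other across the bridge before nvme-tcp is loaded. The tagging is what lets the teardown later strip exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore. A sketch of the pattern, with the wrapper body reconstructed from the expansion shown in the trace:

  # Tag test-owned rules so they can be removed wholesale later (sketch).
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Later cleanup: drop every rule carrying the SPDK_NVMF marker.
  iptables-save | grep -v SPDK_NVMF | iptables-restore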
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:56.695 09:31:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:57.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.261 Waiting for block devices as requested 00:20:57.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.588 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:57.588 No valid GPT data, bailing 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:57.588 09:31:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:57.588 No valid GPT data, bailing 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:57.588 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:57.845 No valid GPT data, bailing 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
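Before exporting a namespace, the helper walks /sys/block/nvme* looking for a block device that is not zoned and carries no partition table; "No valid GPT data, bailing" is the expected outcome for a blank disk, and the last candidate that passes becomes the backing device (here it ends up as /dev/nvme1n1). A minimal sketch of that selection using only blkid for the partition-table probe (the real helper also runs SPDK's spdk-gpt.py check):

  # Pick a blank, non-zoned NVMe block device to back the kernel target (sketch).
  nvme=""
  for block in /sys/block/nvme*; do
      dev=$(basename "$block")
      # Skip zoned namespaces.
      [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
      # Skip anything that already has a partition table.
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
      nvme=/dev/$dev
  done
  echo "using $nvme as the backing device"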
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:57.845 No valid GPT data, bailing 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -a 10.0.0.1 -t tcp -s 4420 00:20:57.845 00:20:57.845 Discovery Log Number of Records 2, Generation counter 2 00:20:57.845 =====Discovery Log Entry 0====== 00:20:57.845 trtype: tcp 00:20:57.845 adrfam: ipv4 00:20:57.845 subtype: current discovery subsystem 00:20:57.845 treq: not specified, sq flow control disable supported 00:20:57.845 portid: 1 00:20:57.845 trsvcid: 4420 00:20:57.845 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:57.845 traddr: 10.0.0.1 00:20:57.845 eflags: none 00:20:57.845 sectype: none 00:20:57.845 =====Discovery Log Entry 1====== 00:20:57.845 trtype: tcp 00:20:57.845 adrfam: ipv4 00:20:57.845 subtype: nvme subsystem 00:20:57.845 treq: not 
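With /dev/nvme1n1 selected, the kernel target is assembled entirely through nvmet's configfs tree: one subsystem with a single namespace, one TCP port on 10.0.0.1:4420, and a symlink that binds the subsystem to the port. The trace only shows the values being echoed, not the destination files, so the attribute names below are the standard nvmet ones and should be read as an inferred reconstruction:

  # Export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn over TCP (sketch; run as root).
  modprobe nvmet            # nvmet_tcp shows up later and is removed explicitly during cleanup
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (kernels that expose attr_model)
  echo 1 > "$subsys/attr_allow_any_host"                         # skip host NQN allow-listing
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # bind the subsystem to the port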
specified, sq flow control disable supported 00:20:57.845 portid: 1 00:20:57.845 trsvcid: 4420 00:20:57.845 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:57.845 traddr: 10.0.0.1 00:20:57.845 eflags: none 00:20:57.845 sectype: none 00:20:57.845 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:57.845 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:58.104 ===================================================== 00:20:58.104 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:58.104 ===================================================== 00:20:58.104 Controller Capabilities/Features 00:20:58.104 ================================ 00:20:58.104 Vendor ID: 0000 00:20:58.104 Subsystem Vendor ID: 0000 00:20:58.104 Serial Number: 6ff95665f2f5e5f814c2 00:20:58.104 Model Number: Linux 00:20:58.104 Firmware Version: 6.8.9-20 00:20:58.104 Recommended Arb Burst: 0 00:20:58.104 IEEE OUI Identifier: 00 00 00 00:20:58.104 Multi-path I/O 00:20:58.104 May have multiple subsystem ports: No 00:20:58.104 May have multiple controllers: No 00:20:58.104 Associated with SR-IOV VF: No 00:20:58.104 Max Data Transfer Size: Unlimited 00:20:58.104 Max Number of Namespaces: 0 00:20:58.104 Max Number of I/O Queues: 1024 00:20:58.104 NVMe Specification Version (VS): 1.3 00:20:58.104 NVMe Specification Version (Identify): 1.3 00:20:58.104 Maximum Queue Entries: 1024 00:20:58.104 Contiguous Queues Required: No 00:20:58.104 Arbitration Mechanisms Supported 00:20:58.104 Weighted Round Robin: Not Supported 00:20:58.104 Vendor Specific: Not Supported 00:20:58.104 Reset Timeout: 7500 ms 00:20:58.104 Doorbell Stride: 4 bytes 00:20:58.104 NVM Subsystem Reset: Not Supported 00:20:58.104 Command Sets Supported 00:20:58.104 NVM Command Set: Supported 00:20:58.104 Boot Partition: Not Supported 00:20:58.104 Memory Page Size Minimum: 4096 bytes 00:20:58.104 Memory Page Size Maximum: 4096 bytes 00:20:58.104 Persistent Memory Region: Not Supported 00:20:58.104 Optional Asynchronous Events Supported 00:20:58.104 Namespace Attribute Notices: Not Supported 00:20:58.104 Firmware Activation Notices: Not Supported 00:20:58.104 ANA Change Notices: Not Supported 00:20:58.104 PLE Aggregate Log Change Notices: Not Supported 00:20:58.104 LBA Status Info Alert Notices: Not Supported 00:20:58.104 EGE Aggregate Log Change Notices: Not Supported 00:20:58.104 Normal NVM Subsystem Shutdown event: Not Supported 00:20:58.104 Zone Descriptor Change Notices: Not Supported 00:20:58.104 Discovery Log Change Notices: Supported 00:20:58.104 Controller Attributes 00:20:58.104 128-bit Host Identifier: Not Supported 00:20:58.104 Non-Operational Permissive Mode: Not Supported 00:20:58.104 NVM Sets: Not Supported 00:20:58.104 Read Recovery Levels: Not Supported 00:20:58.104 Endurance Groups: Not Supported 00:20:58.104 Predictable Latency Mode: Not Supported 00:20:58.104 Traffic Based Keep ALive: Not Supported 00:20:58.104 Namespace Granularity: Not Supported 00:20:58.104 SQ Associations: Not Supported 00:20:58.104 UUID List: Not Supported 00:20:58.104 Multi-Domain Subsystem: Not Supported 00:20:58.104 Fixed Capacity Management: Not Supported 00:20:58.104 Variable Capacity Management: Not Supported 00:20:58.104 Delete Endurance Group: Not Supported 00:20:58.104 Delete NVM Set: Not Supported 00:20:58.104 Extended LBA Formats Supported: Not Supported 00:20:58.104 Flexible Data 
Placement Supported: Not Supported 00:20:58.104 00:20:58.104 Controller Memory Buffer Support 00:20:58.104 ================================ 00:20:58.104 Supported: No 00:20:58.104 00:20:58.104 Persistent Memory Region Support 00:20:58.104 ================================ 00:20:58.104 Supported: No 00:20:58.104 00:20:58.104 Admin Command Set Attributes 00:20:58.104 ============================ 00:20:58.104 Security Send/Receive: Not Supported 00:20:58.104 Format NVM: Not Supported 00:20:58.104 Firmware Activate/Download: Not Supported 00:20:58.104 Namespace Management: Not Supported 00:20:58.104 Device Self-Test: Not Supported 00:20:58.104 Directives: Not Supported 00:20:58.104 NVMe-MI: Not Supported 00:20:58.104 Virtualization Management: Not Supported 00:20:58.104 Doorbell Buffer Config: Not Supported 00:20:58.104 Get LBA Status Capability: Not Supported 00:20:58.104 Command & Feature Lockdown Capability: Not Supported 00:20:58.104 Abort Command Limit: 1 00:20:58.104 Async Event Request Limit: 1 00:20:58.104 Number of Firmware Slots: N/A 00:20:58.104 Firmware Slot 1 Read-Only: N/A 00:20:58.104 Firmware Activation Without Reset: N/A 00:20:58.104 Multiple Update Detection Support: N/A 00:20:58.104 Firmware Update Granularity: No Information Provided 00:20:58.104 Per-Namespace SMART Log: No 00:20:58.104 Asymmetric Namespace Access Log Page: Not Supported 00:20:58.104 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:58.104 Command Effects Log Page: Not Supported 00:20:58.104 Get Log Page Extended Data: Supported 00:20:58.104 Telemetry Log Pages: Not Supported 00:20:58.104 Persistent Event Log Pages: Not Supported 00:20:58.104 Supported Log Pages Log Page: May Support 00:20:58.104 Commands Supported & Effects Log Page: Not Supported 00:20:58.104 Feature Identifiers & Effects Log Page:May Support 00:20:58.104 NVMe-MI Commands & Effects Log Page: May Support 00:20:58.104 Data Area 4 for Telemetry Log: Not Supported 00:20:58.104 Error Log Page Entries Supported: 1 00:20:58.104 Keep Alive: Not Supported 00:20:58.104 00:20:58.104 NVM Command Set Attributes 00:20:58.104 ========================== 00:20:58.104 Submission Queue Entry Size 00:20:58.104 Max: 1 00:20:58.104 Min: 1 00:20:58.104 Completion Queue Entry Size 00:20:58.104 Max: 1 00:20:58.104 Min: 1 00:20:58.104 Number of Namespaces: 0 00:20:58.104 Compare Command: Not Supported 00:20:58.104 Write Uncorrectable Command: Not Supported 00:20:58.104 Dataset Management Command: Not Supported 00:20:58.104 Write Zeroes Command: Not Supported 00:20:58.104 Set Features Save Field: Not Supported 00:20:58.104 Reservations: Not Supported 00:20:58.104 Timestamp: Not Supported 00:20:58.104 Copy: Not Supported 00:20:58.104 Volatile Write Cache: Not Present 00:20:58.104 Atomic Write Unit (Normal): 1 00:20:58.104 Atomic Write Unit (PFail): 1 00:20:58.104 Atomic Compare & Write Unit: 1 00:20:58.104 Fused Compare & Write: Not Supported 00:20:58.104 Scatter-Gather List 00:20:58.104 SGL Command Set: Supported 00:20:58.104 SGL Keyed: Not Supported 00:20:58.104 SGL Bit Bucket Descriptor: Not Supported 00:20:58.104 SGL Metadata Pointer: Not Supported 00:20:58.104 Oversized SGL: Not Supported 00:20:58.104 SGL Metadata Address: Not Supported 00:20:58.104 SGL Offset: Supported 00:20:58.104 Transport SGL Data Block: Not Supported 00:20:58.104 Replay Protected Memory Block: Not Supported 00:20:58.104 00:20:58.104 Firmware Slot Information 00:20:58.104 ========================= 00:20:58.104 Active slot: 0 00:20:58.104 00:20:58.104 00:20:58.104 Error Log 
00:20:58.104 ========= 00:20:58.104 00:20:58.104 Active Namespaces 00:20:58.104 ================= 00:20:58.104 Discovery Log Page 00:20:58.104 ================== 00:20:58.104 Generation Counter: 2 00:20:58.104 Number of Records: 2 00:20:58.104 Record Format: 0 00:20:58.104 00:20:58.104 Discovery Log Entry 0 00:20:58.104 ---------------------- 00:20:58.104 Transport Type: 3 (TCP) 00:20:58.104 Address Family: 1 (IPv4) 00:20:58.104 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:58.104 Entry Flags: 00:20:58.104 Duplicate Returned Information: 0 00:20:58.104 Explicit Persistent Connection Support for Discovery: 0 00:20:58.104 Transport Requirements: 00:20:58.104 Secure Channel: Not Specified 00:20:58.104 Port ID: 1 (0x0001) 00:20:58.104 Controller ID: 65535 (0xffff) 00:20:58.104 Admin Max SQ Size: 32 00:20:58.104 Transport Service Identifier: 4420 00:20:58.104 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:58.104 Transport Address: 10.0.0.1 00:20:58.104 Discovery Log Entry 1 00:20:58.104 ---------------------- 00:20:58.104 Transport Type: 3 (TCP) 00:20:58.104 Address Family: 1 (IPv4) 00:20:58.104 Subsystem Type: 2 (NVM Subsystem) 00:20:58.104 Entry Flags: 00:20:58.104 Duplicate Returned Information: 0 00:20:58.104 Explicit Persistent Connection Support for Discovery: 0 00:20:58.104 Transport Requirements: 00:20:58.104 Secure Channel: Not Specified 00:20:58.104 Port ID: 1 (0x0001) 00:20:58.104 Controller ID: 65535 (0xffff) 00:20:58.104 Admin Max SQ Size: 32 00:20:58.104 Transport Service Identifier: 4420 00:20:58.104 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:58.104 Transport Address: 10.0.0.1 00:20:58.104 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:58.364 get_feature(0x01) failed 00:20:58.364 get_feature(0x02) failed 00:20:58.364 get_feature(0x04) failed 00:20:58.364 ===================================================== 00:20:58.364 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:58.364 ===================================================== 00:20:58.364 Controller Capabilities/Features 00:20:58.364 ================================ 00:20:58.364 Vendor ID: 0000 00:20:58.364 Subsystem Vendor ID: 0000 00:20:58.364 Serial Number: 395a08a01517c6039ad2 00:20:58.364 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:58.364 Firmware Version: 6.8.9-20 00:20:58.364 Recommended Arb Burst: 6 00:20:58.364 IEEE OUI Identifier: 00 00 00 00:20:58.364 Multi-path I/O 00:20:58.364 May have multiple subsystem ports: Yes 00:20:58.364 May have multiple controllers: Yes 00:20:58.364 Associated with SR-IOV VF: No 00:20:58.364 Max Data Transfer Size: Unlimited 00:20:58.364 Max Number of Namespaces: 1024 00:20:58.364 Max Number of I/O Queues: 128 00:20:58.364 NVMe Specification Version (VS): 1.3 00:20:58.364 NVMe Specification Version (Identify): 1.3 00:20:58.364 Maximum Queue Entries: 1024 00:20:58.364 Contiguous Queues Required: No 00:20:58.364 Arbitration Mechanisms Supported 00:20:58.364 Weighted Round Robin: Not Supported 00:20:58.364 Vendor Specific: Not Supported 00:20:58.364 Reset Timeout: 7500 ms 00:20:58.364 Doorbell Stride: 4 bytes 00:20:58.364 NVM Subsystem Reset: Not Supported 00:20:58.364 Command Sets Supported 00:20:58.364 NVM Command Set: Supported 00:20:58.364 Boot Partition: Not Supported 00:20:58.364 Memory 
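The target is then queried from the initiator side: nvme-cli fetches the discovery log, and spdk_nvme_identify (invoked via its full build/bin path in the trace) is pointed first at the discovery subsystem and then at the NVM subsystem by passing an SPDK transport ID string through -r. The get_feature(0x01/0x02/0x04/0x05) failed lines appear to be the tool probing optional features the Linux kernel target does not implement; the identify output still completes. The three invocations from the trace, reformatted for readability (host NQN/ID are the values generated earlier in this run):

  # Fetch the discovery log from the kernel target.
  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 \
      --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68
  # Identify the discovery controller and the NVM subsystem over fabrics.
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'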
Page Size Minimum: 4096 bytes 00:20:58.364 Memory Page Size Maximum: 4096 bytes 00:20:58.364 Persistent Memory Region: Not Supported 00:20:58.364 Optional Asynchronous Events Supported 00:20:58.364 Namespace Attribute Notices: Supported 00:20:58.364 Firmware Activation Notices: Not Supported 00:20:58.364 ANA Change Notices: Supported 00:20:58.364 PLE Aggregate Log Change Notices: Not Supported 00:20:58.364 LBA Status Info Alert Notices: Not Supported 00:20:58.364 EGE Aggregate Log Change Notices: Not Supported 00:20:58.364 Normal NVM Subsystem Shutdown event: Not Supported 00:20:58.364 Zone Descriptor Change Notices: Not Supported 00:20:58.364 Discovery Log Change Notices: Not Supported 00:20:58.364 Controller Attributes 00:20:58.364 128-bit Host Identifier: Supported 00:20:58.364 Non-Operational Permissive Mode: Not Supported 00:20:58.364 NVM Sets: Not Supported 00:20:58.364 Read Recovery Levels: Not Supported 00:20:58.364 Endurance Groups: Not Supported 00:20:58.364 Predictable Latency Mode: Not Supported 00:20:58.364 Traffic Based Keep ALive: Supported 00:20:58.364 Namespace Granularity: Not Supported 00:20:58.364 SQ Associations: Not Supported 00:20:58.364 UUID List: Not Supported 00:20:58.364 Multi-Domain Subsystem: Not Supported 00:20:58.364 Fixed Capacity Management: Not Supported 00:20:58.364 Variable Capacity Management: Not Supported 00:20:58.364 Delete Endurance Group: Not Supported 00:20:58.364 Delete NVM Set: Not Supported 00:20:58.364 Extended LBA Formats Supported: Not Supported 00:20:58.364 Flexible Data Placement Supported: Not Supported 00:20:58.364 00:20:58.364 Controller Memory Buffer Support 00:20:58.364 ================================ 00:20:58.364 Supported: No 00:20:58.364 00:20:58.364 Persistent Memory Region Support 00:20:58.364 ================================ 00:20:58.364 Supported: No 00:20:58.364 00:20:58.364 Admin Command Set Attributes 00:20:58.364 ============================ 00:20:58.364 Security Send/Receive: Not Supported 00:20:58.364 Format NVM: Not Supported 00:20:58.364 Firmware Activate/Download: Not Supported 00:20:58.364 Namespace Management: Not Supported 00:20:58.364 Device Self-Test: Not Supported 00:20:58.364 Directives: Not Supported 00:20:58.364 NVMe-MI: Not Supported 00:20:58.364 Virtualization Management: Not Supported 00:20:58.364 Doorbell Buffer Config: Not Supported 00:20:58.364 Get LBA Status Capability: Not Supported 00:20:58.364 Command & Feature Lockdown Capability: Not Supported 00:20:58.364 Abort Command Limit: 4 00:20:58.364 Async Event Request Limit: 4 00:20:58.364 Number of Firmware Slots: N/A 00:20:58.364 Firmware Slot 1 Read-Only: N/A 00:20:58.364 Firmware Activation Without Reset: N/A 00:20:58.364 Multiple Update Detection Support: N/A 00:20:58.364 Firmware Update Granularity: No Information Provided 00:20:58.364 Per-Namespace SMART Log: Yes 00:20:58.364 Asymmetric Namespace Access Log Page: Supported 00:20:58.364 ANA Transition Time : 10 sec 00:20:58.364 00:20:58.364 Asymmetric Namespace Access Capabilities 00:20:58.364 ANA Optimized State : Supported 00:20:58.364 ANA Non-Optimized State : Supported 00:20:58.364 ANA Inaccessible State : Supported 00:20:58.364 ANA Persistent Loss State : Supported 00:20:58.364 ANA Change State : Supported 00:20:58.364 ANAGRPID is not changed : No 00:20:58.364 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:58.364 00:20:58.364 ANA Group Identifier Maximum : 128 00:20:58.364 Number of ANA Group Identifiers : 128 00:20:58.364 Max Number of Allowed Namespaces : 1024 00:20:58.364 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:58.364 Command Effects Log Page: Supported 00:20:58.364 Get Log Page Extended Data: Supported 00:20:58.364 Telemetry Log Pages: Not Supported 00:20:58.364 Persistent Event Log Pages: Not Supported 00:20:58.364 Supported Log Pages Log Page: May Support 00:20:58.364 Commands Supported & Effects Log Page: Not Supported 00:20:58.364 Feature Identifiers & Effects Log Page:May Support 00:20:58.364 NVMe-MI Commands & Effects Log Page: May Support 00:20:58.364 Data Area 4 for Telemetry Log: Not Supported 00:20:58.364 Error Log Page Entries Supported: 128 00:20:58.364 Keep Alive: Supported 00:20:58.364 Keep Alive Granularity: 1000 ms 00:20:58.364 00:20:58.364 NVM Command Set Attributes 00:20:58.364 ========================== 00:20:58.364 Submission Queue Entry Size 00:20:58.364 Max: 64 00:20:58.364 Min: 64 00:20:58.364 Completion Queue Entry Size 00:20:58.364 Max: 16 00:20:58.364 Min: 16 00:20:58.364 Number of Namespaces: 1024 00:20:58.364 Compare Command: Not Supported 00:20:58.364 Write Uncorrectable Command: Not Supported 00:20:58.364 Dataset Management Command: Supported 00:20:58.364 Write Zeroes Command: Supported 00:20:58.364 Set Features Save Field: Not Supported 00:20:58.364 Reservations: Not Supported 00:20:58.364 Timestamp: Not Supported 00:20:58.364 Copy: Not Supported 00:20:58.364 Volatile Write Cache: Present 00:20:58.364 Atomic Write Unit (Normal): 1 00:20:58.364 Atomic Write Unit (PFail): 1 00:20:58.364 Atomic Compare & Write Unit: 1 00:20:58.364 Fused Compare & Write: Not Supported 00:20:58.364 Scatter-Gather List 00:20:58.364 SGL Command Set: Supported 00:20:58.364 SGL Keyed: Not Supported 00:20:58.364 SGL Bit Bucket Descriptor: Not Supported 00:20:58.364 SGL Metadata Pointer: Not Supported 00:20:58.364 Oversized SGL: Not Supported 00:20:58.364 SGL Metadata Address: Not Supported 00:20:58.364 SGL Offset: Supported 00:20:58.364 Transport SGL Data Block: Not Supported 00:20:58.364 Replay Protected Memory Block: Not Supported 00:20:58.364 00:20:58.365 Firmware Slot Information 00:20:58.365 ========================= 00:20:58.365 Active slot: 0 00:20:58.365 00:20:58.365 Asymmetric Namespace Access 00:20:58.365 =========================== 00:20:58.365 Change Count : 0 00:20:58.365 Number of ANA Group Descriptors : 1 00:20:58.365 ANA Group Descriptor : 0 00:20:58.365 ANA Group ID : 1 00:20:58.365 Number of NSID Values : 1 00:20:58.365 Change Count : 0 00:20:58.365 ANA State : 1 00:20:58.365 Namespace Identifier : 1 00:20:58.365 00:20:58.365 Commands Supported and Effects 00:20:58.365 ============================== 00:20:58.365 Admin Commands 00:20:58.365 -------------- 00:20:58.365 Get Log Page (02h): Supported 00:20:58.365 Identify (06h): Supported 00:20:58.365 Abort (08h): Supported 00:20:58.365 Set Features (09h): Supported 00:20:58.365 Get Features (0Ah): Supported 00:20:58.365 Asynchronous Event Request (0Ch): Supported 00:20:58.365 Keep Alive (18h): Supported 00:20:58.365 I/O Commands 00:20:58.365 ------------ 00:20:58.365 Flush (00h): Supported 00:20:58.365 Write (01h): Supported LBA-Change 00:20:58.365 Read (02h): Supported 00:20:58.365 Write Zeroes (08h): Supported LBA-Change 00:20:58.365 Dataset Management (09h): Supported 00:20:58.365 00:20:58.365 Error Log 00:20:58.365 ========= 00:20:58.365 Entry: 0 00:20:58.365 Error Count: 0x3 00:20:58.365 Submission Queue Id: 0x0 00:20:58.365 Command Id: 0x5 00:20:58.365 Phase Bit: 0 00:20:58.365 Status Code: 0x2 00:20:58.365 Status Code Type: 0x0 00:20:58.365 Do Not Retry: 1 00:20:58.365 Error 
Location: 0x28 00:20:58.365 LBA: 0x0 00:20:58.365 Namespace: 0x0 00:20:58.365 Vendor Log Page: 0x0 00:20:58.365 ----------- 00:20:58.365 Entry: 1 00:20:58.365 Error Count: 0x2 00:20:58.365 Submission Queue Id: 0x0 00:20:58.365 Command Id: 0x5 00:20:58.365 Phase Bit: 0 00:20:58.365 Status Code: 0x2 00:20:58.365 Status Code Type: 0x0 00:20:58.365 Do Not Retry: 1 00:20:58.365 Error Location: 0x28 00:20:58.365 LBA: 0x0 00:20:58.365 Namespace: 0x0 00:20:58.365 Vendor Log Page: 0x0 00:20:58.365 ----------- 00:20:58.365 Entry: 2 00:20:58.365 Error Count: 0x1 00:20:58.365 Submission Queue Id: 0x0 00:20:58.365 Command Id: 0x4 00:20:58.365 Phase Bit: 0 00:20:58.365 Status Code: 0x2 00:20:58.365 Status Code Type: 0x0 00:20:58.365 Do Not Retry: 1 00:20:58.365 Error Location: 0x28 00:20:58.365 LBA: 0x0 00:20:58.365 Namespace: 0x0 00:20:58.365 Vendor Log Page: 0x0 00:20:58.365 00:20:58.365 Number of Queues 00:20:58.365 ================ 00:20:58.365 Number of I/O Submission Queues: 128 00:20:58.365 Number of I/O Completion Queues: 128 00:20:58.365 00:20:58.365 ZNS Specific Controller Data 00:20:58.365 ============================ 00:20:58.365 Zone Append Size Limit: 0 00:20:58.365 00:20:58.365 00:20:58.365 Active Namespaces 00:20:58.365 ================= 00:20:58.365 get_feature(0x05) failed 00:20:58.365 Namespace ID:1 00:20:58.365 Command Set Identifier: NVM (00h) 00:20:58.365 Deallocate: Supported 00:20:58.365 Deallocated/Unwritten Error: Not Supported 00:20:58.365 Deallocated Read Value: Unknown 00:20:58.365 Deallocate in Write Zeroes: Not Supported 00:20:58.365 Deallocated Guard Field: 0xFFFF 00:20:58.365 Flush: Supported 00:20:58.365 Reservation: Not Supported 00:20:58.365 Namespace Sharing Capabilities: Multiple Controllers 00:20:58.365 Size (in LBAs): 1310720 (5GiB) 00:20:58.365 Capacity (in LBAs): 1310720 (5GiB) 00:20:58.365 Utilization (in LBAs): 1310720 (5GiB) 00:20:58.365 UUID: 4cf67eb5-1524-4d6b-a139-0ee0103803d4 00:20:58.365 Thin Provisioning: Not Supported 00:20:58.365 Per-NS Atomic Units: Yes 00:20:58.365 Atomic Boundary Size (Normal): 0 00:20:58.365 Atomic Boundary Size (PFail): 0 00:20:58.365 Atomic Boundary Offset: 0 00:20:58.365 NGUID/EUI64 Never Reused: No 00:20:58.365 ANA group ID: 1 00:20:58.365 Namespace Write Protected: No 00:20:58.365 Number of LBA Formats: 1 00:20:58.365 Current LBA Format: LBA Format #00 00:20:58.365 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:58.365 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:58.365 rmmod nvme_tcp 00:20:58.365 rmmod nvme_fabrics 00:20:58.365 09:31:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:58.365 09:31:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:58.365 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
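nvmftestfini unwinds everything in roughly reverse order: the nvme-tcp/nvme-fabrics modules are removed, the SPDK_NVMF-tagged firewall rules are dropped, the veth/bridge devices are detached and deleted, and the namespace is removed; clean_kernel_target then dismantles the configfs export and unloads nvmet, as the next trace lines show. A condensed sketch of both halves using the names from the trace (the destination of the bare "echo 0" is not shown and is assumed to be the namespace enable file):

  # Network teardown (mirrors the setup sketched earlier).
  iptables-save | grep -v SPDK_NVMF | iptables-restore         # drop only the tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if                                   # deleting one veth end removes its peer
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns del nvmf_tgt_ns_spdk
  # Kernel target teardown: disable the namespace, unlink, then remove in reverse order.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
  modprobe -r nvmet_tcp nvmet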
# return 0 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:58.624 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:58.884 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:58.884 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:58.884 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:58.884 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:58.884 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:58.884 09:31:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:59.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:59.821 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:59.821 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:59.821 00:20:59.821 real 0m3.950s 00:20:59.821 user 0m1.347s 00:20:59.821 sys 0m1.917s 00:20:59.821 09:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.821 09:31:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.821 ************************************ 00:20:59.822 END TEST nvmf_identify_kernel_target 00:20:59.822 ************************************ 00:20:59.822 09:31:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:59.822 09:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.822 09:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.822 09:31:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.822 ************************************ 00:20:59.822 START TEST nvmf_auth_host 00:20:59.822 ************************************ 00:20:59.822 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:00.081 * Looking for test storage... 
00:21:00.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.081 --rc genhtml_branch_coverage=1 00:21:00.081 --rc genhtml_function_coverage=1 00:21:00.081 --rc genhtml_legend=1 00:21:00.081 --rc geninfo_all_blocks=1 00:21:00.081 --rc geninfo_unexecuted_blocks=1 00:21:00.081 00:21:00.081 ' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.081 --rc genhtml_branch_coverage=1 00:21:00.081 --rc genhtml_function_coverage=1 00:21:00.081 --rc genhtml_legend=1 00:21:00.081 --rc geninfo_all_blocks=1 00:21:00.081 --rc geninfo_unexecuted_blocks=1 00:21:00.081 00:21:00.081 ' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.081 --rc genhtml_branch_coverage=1 00:21:00.081 --rc genhtml_function_coverage=1 00:21:00.081 --rc genhtml_legend=1 00:21:00.081 --rc geninfo_all_blocks=1 00:21:00.081 --rc geninfo_unexecuted_blocks=1 00:21:00.081 00:21:00.081 ' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.081 --rc genhtml_branch_coverage=1 00:21:00.081 --rc genhtml_function_coverage=1 00:21:00.081 --rc genhtml_legend=1 00:21:00.081 --rc geninfo_all_blocks=1 00:21:00.081 --rc geninfo_unexecuted_blocks=1 00:21:00.081 00:21:00.081 ' 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
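The lcov probe above boils down to a field-by-field dotted-version comparison (cmp_versions), used to decide whether the older --rc lcov_branch_coverage/lcov_function_coverage options are needed. A minimal standalone sketch of the same idea, assuming purely numeric fields (the real helper also normalizes non-numeric components):

  # Return success when version $1 is strictly older than $2 (sketch).
  version_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1   # equal is not "less than"
  }
  version_lt 1.15 2 && echo "old lcov: keep the legacy --rc branch/function coverage flags"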
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.081 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
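Each run mints a fresh UUID-based host NQN with nvme gen-hostnqn, and the bare UUID doubles as the host ID handed to nvme-cli. A small sketch of that derivation; the parameter expansion here is one plausible way to peel the UUID off, since the trace only shows the resulting values:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # the bare UUID portion
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Typical use, matching the discover call seen earlier:
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420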
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.082 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.082 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.341 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:00.342 Cannot find device "nvmf_init_br" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:00.342 Cannot find device "nvmf_init_br2" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:00.342 Cannot find device "nvmf_tgt_br" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.342 Cannot find device "nvmf_tgt_br2" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:00.342 Cannot find device "nvmf_init_br" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:00.342 Cannot find device "nvmf_init_br2" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:00.342 Cannot find device "nvmf_tgt_br" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:00.342 Cannot find device "nvmf_tgt_br2" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:00.342 Cannot find device "nvmf_br" 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:00.342 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:00.342 Cannot find device "nvmf_init_if" 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:00.342 Cannot find device "nvmf_init_if2" 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.342 09:31:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:00.342 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:00.601 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:00.602 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:00.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:00.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:21:00.861 00:21:00.861 --- 10.0.0.3 ping statistics --- 00:21:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.861 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:00.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:00.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:21:00.861 00:21:00.861 --- 10.0.0.4 ping statistics --- 00:21:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.861 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:00.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:00.861 00:21:00.861 --- 10.0.0.1 ping statistics --- 00:21:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.861 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:00.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:00.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:21:00.861 00:21:00.861 --- 10.0.0.2 ping statistics --- 00:21:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.861 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78046 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78046 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78046 ']' 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
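The nvmf_veth_init sequence traced above amounts to a small amount of iproute2 and iptables setup before the target app is started. A condensed sketch of the same topology follows (only one of the two initiator/target veth pairs is shown; all interface, namespace and address names are the ones appearing in the log, and this is an illustrative recap, not an excerpt of the test scripts):

# Condensed sketch of the topology built by nvmf_veth_init above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                               # bridge tying both sides together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) in
ping -c 1 10.0.0.3                                            # root namespace can now reach the target side

The initiator-facing interfaces (nvmf_init_if/if2 at 10.0.0.1-2) stay in the root namespace while the target interfaces (nvmf_tgt_if/if2 at 10.0.0.3-4) live in nvmf_tgt_ns_spdk, all joined through nvmf_br, which is why the ping checks above are run both directly and via ip netns exec before nvmf_tgt is launched inside the namespace.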
00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.861 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:01.798 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6898fc6d4c1fe781f2c0004466068f08 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.o3M 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6898fc6d4c1fe781f2c0004466068f08 0 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6898fc6d4c1fe781f2c0004466068f08 0 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6898fc6d4c1fe781f2c0004466068f08 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.o3M 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.o3M 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.o3M 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:01.799 09:31:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa5e5aa9d45a992bb223926d7bbd8ab1487c0e8b096800cb80ec4ecd3f572318 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7Nr 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa5e5aa9d45a992bb223926d7bbd8ab1487c0e8b096800cb80ec4ecd3f572318 3 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa5e5aa9d45a992bb223926d7bbd8ab1487c0e8b096800cb80ec4ecd3f572318 3 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa5e5aa9d45a992bb223926d7bbd8ab1487c0e8b096800cb80ec4ecd3f572318 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:01.799 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7Nr 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7Nr 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7Nr 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b0d81846807a33f2c9cc94f9a70e4354f483ad353b9d76cb 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3ZT 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b0d81846807a33f2c9cc94f9a70e4354f483ad353b9d76cb 0 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b0d81846807a33f2c9cc94f9a70e4354f483ad353b9d76cb 0 
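The gen_dhchap_key / format_dhchap_key traces above pull a hex string from /dev/urandom with xxd and hand it to a short inline python step that emits a DHHC-1 secret (the DHHC-1:00:... and DHHC-1:03:... values used later in the run). A rough stand-alone equivalent is sketched below; the digest codes (0=null, 1=sha256, 2=sha384, 3=sha512) come from the digests map in the trace, while the CRC-32 suffix is assumed from the usual DH-HMAC-CHAP secret encoding rather than quoted from the scripts:

# Rough stand-alone equivalent of gen_dhchap_key as traced above (illustrative only)
key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex characters, used verbatim as the secret
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()               # the hex string itself is the key material
digest = int(sys.argv[2])                   # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed CRC-32 trailer of the DHHC-1 format
print(f"DHHC-1:{digest:02d}:{base64.b64encode(secret + crc).decode()}:")
PY

The resulting strings are what keyring_file_add_key registers as key0..key4 / ckey0..ckey3 further down, and they correspond to the DHHC-1 values echoed into the kernel nvmet host entry by nvmet_auth_set_key.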
00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b0d81846807a33f2c9cc94f9a70e4354f483ad353b9d76cb 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3ZT 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3ZT 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3ZT 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=51d55af40ec9ff88567121ff529495389b94badfb8495115 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bg8 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 51d55af40ec9ff88567121ff529495389b94badfb8495115 2 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 51d55af40ec9ff88567121ff529495389b94badfb8495115 2 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=51d55af40ec9ff88567121ff529495389b94badfb8495115 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bg8 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bg8 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Bg8 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.058 09:31:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4f9609ca74bc8c9cdd4aafe407f813c 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZKl 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4f9609ca74bc8c9cdd4aafe407f813c 1 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4f9609ca74bc8c9cdd4aafe407f813c 1 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4f9609ca74bc8c9cdd4aafe407f813c 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:02.058 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.318 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZKl 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZKl 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZKl 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49df4ecc2609e4f407f6738be306d3b1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.esX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49df4ecc2609e4f407f6738be306d3b1 1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 49df4ecc2609e4f407f6738be306d3b1 1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=49df4ecc2609e4f407f6738be306d3b1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.esX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.esX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.esX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=133f3f6eaab3ab55b26de2f8db8245eaf169a3c4e662b444 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.urz 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 133f3f6eaab3ab55b26de2f8db8245eaf169a3c4e662b444 2 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 133f3f6eaab3ab55b26de2f8db8245eaf169a3c4e662b444 2 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=133f3f6eaab3ab55b26de2f8db8245eaf169a3c4e662b444 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.urz 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.urz 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.urz 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:02.319 09:31:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ce70f94b706f843c2760b7e2678d13b 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Sqq 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ce70f94b706f843c2760b7e2678d13b 0 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ce70f94b706f843c2760b7e2678d13b 0 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ce70f94b706f843c2760b7e2678d13b 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:02.319 09:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Sqq 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Sqq 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Sqq 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:02.319 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2b6e20ff05fe65272efed2e22d8437a715dae23a9f65aa702e3396eedb7a231c 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Z82 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2b6e20ff05fe65272efed2e22d8437a715dae23a9f65aa702e3396eedb7a231c 3 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2b6e20ff05fe65272efed2e22d8437a715dae23a9f65aa702e3396eedb7a231c 3 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2b6e20ff05fe65272efed2e22d8437a715dae23a9f65aa702e3396eedb7a231c 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Z82 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Z82 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Z82 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78046 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78046 ']' 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.578 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.o3M 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7Nr ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Nr 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3ZT 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Bg8 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Bg8 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZKl 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.esX ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.esX 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.urz 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Sqq ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Sqq 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Z82 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.838 09:31:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:02.838 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:02.839 09:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:03.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.405 Waiting for block devices as requested 00:21:03.405 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:03.662 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:04.597 No valid GPT data, bailing 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:04.597 No valid GPT data, bailing 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:04.597 No valid GPT data, bailing 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:04.597 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:04.598 No valid GPT data, bailing 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:04.598 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -a 10.0.0.1 -t tcp -s 4420 00:21:04.857 00:21:04.857 Discovery Log Number of Records 2, Generation counter 2 00:21:04.857 =====Discovery Log Entry 0====== 00:21:04.857 trtype: tcp 00:21:04.857 adrfam: ipv4 00:21:04.857 subtype: current discovery subsystem 00:21:04.857 treq: not specified, sq flow control disable supported 00:21:04.857 portid: 1 00:21:04.857 trsvcid: 4420 00:21:04.857 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:04.857 traddr: 10.0.0.1 00:21:04.857 eflags: none 00:21:04.857 sectype: none 00:21:04.857 =====Discovery Log Entry 1====== 00:21:04.857 trtype: tcp 00:21:04.857 adrfam: ipv4 00:21:04.857 subtype: nvme subsystem 00:21:04.857 treq: not specified, sq flow control disable supported 00:21:04.857 portid: 1 00:21:04.857 trsvcid: 4420 00:21:04.857 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:04.857 traddr: 10.0.0.1 00:21:04.857 eflags: none 00:21:04.857 sectype: none 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.857 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.119 nvme0n1 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:05.119 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 nvme0n1 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.120 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 
09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.383 09:31:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 nvme0n1 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 09:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:05.383 09:31:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 nvme0n1 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.643 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.644 09:31:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.644 nvme0n1 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.644 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.903 
09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:05.903 nvme0n1 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:05.903 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.162 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:06.162 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:06.162 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:06.162 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:06.162 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.162 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.163 09:31:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.163 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.423 nvme0n1 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.423 09:31:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.423 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.424 09:31:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.424 09:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.424 nvme0n1 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.424 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.684 nvme0n1 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.684 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 nvme0n1 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 nvme0n1 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.944 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.945 09:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.514 09:31:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.514 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 nvme0n1 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:07.773 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.774 09:31:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.774 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 nvme0n1 00:21:08.032 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.032 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.032 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.032 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.032 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.032 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.033 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.293 nvme0n1 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.293 09:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.553 nvme0n1 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:08.553 09:31:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.553 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.813 nvme0n1 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.813 09:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.192 09:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.451 nvme0n1 00:21:10.451 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.451 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.451 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.451 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.451 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.710 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.969 nvme0n1 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.969 09:31:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.969 09:31:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.969 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.537 nvme0n1 00:21:11.537 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.537 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.537 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.537 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.537 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.537 09:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.537 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.537 
09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.796 nvme0n1 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.796 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.361 nvme0n1 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.361 09:31:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:12.361 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.362 09:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.027 nvme0n1 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:13.027 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.028 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.313 nvme0n1 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:13.313 09:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:13.313 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.314 
09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.314 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.573 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.573 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.573 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.832 nvme0n1 00:21:13.832 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.832 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.832 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.832 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.832 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.832 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:14.091 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.092 09:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.659 nvme0n1 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.659 09:31:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.659 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.660 09:31:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.660 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 nvme0n1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.228 nvme0n1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.228 09:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.487 nvme0n1 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:15.487 
09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.487 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.488 nvme0n1 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.488 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.746 
09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.746 nvme0n1 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.746 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.747 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.007 nvme0n1 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.007 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.007 nvme0n1 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.266 
09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.266 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.267 09:31:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.267 nvme0n1 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:16.267 09:31:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:16.267 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.526 09:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 nvme0n1 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.526 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.527 09:31:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.527 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 nvme0n1 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:16.785 
09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
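Editor's note, for readers following the trace: each pass above is one iteration of the key loop in host/auth.sh. The script installs the target-side DH-HMAC-CHAP secret for the current keyid (nvmet_auth_set_key), pins the host to a single digest/dhgroup pair, attaches a controller with the matching host key, checks that the controller came up, and detaches it before moving to the next keyid. The sketch below is a reconstruction from the xtrace output, not the literal script source; the rpc_cmd calls, flags, addresses, and NQNs are copied from the trace, while the loop scaffolding and variable handling are approximations.

    # Sketch of one iteration of the host/auth.sh key loop (reconstructed from the trace above).
    digest=sha384
    dhgroup=ffdhe3072
    for keyid in "${!keys[@]}"; do
        # Target side: install the DH-HMAC-CHAP secret for this key index (helper name as seen in the trace).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: restrict negotiation to one digest/dhgroup, then attach with the matching key.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

        # Verify the controller authenticated and registered, then tear it down for the next keyid.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done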
00:21:16.785 nvme0n1 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:17.045 09:31:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.045 nvme0n1 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.045 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.304 09:31:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.304 09:31:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.304 nvme0n1 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.304 09:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.304 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.304 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.304 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.304 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 nvme0n1 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.561 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 nvme0n1 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.819 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.077 nvme0n1 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.077 09:31:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.077 09:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.642 nvme0n1 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.642 09:31:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.642 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.901 nvme0n1 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.901 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.902 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.902 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.902 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.159 nvme0n1 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:19.159 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.160 09:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.725 nvme0n1 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:19.725 09:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.725 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.983 nvme0n1 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.983 09:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.547 nvme0n1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.547 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.158 nvme0n1 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.158 09:31:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.158 09:31:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.158 09:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.723 nvme0n1 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.723 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:21.980 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.980 
09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.545 nvme0n1 00:21:22.545 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.545 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.545 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.545 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.545 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.545 09:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.545 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.546 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.112 nvme0n1 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:23.112 09:32:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:23.112 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.113 09:32:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.113 nvme0n1 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.113 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:23.373 09:32:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.373 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.374 nvme0n1 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.374 09:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.374 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.634 nvme0n1 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.634 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.635 nvme0n1 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.635 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.894 nvme0n1 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.894 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.895 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.895 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.154 nvme0n1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.154 nvme0n1 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.154 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:24.412 
09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.412 09:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.412 nvme0n1 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.412 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.413 
09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.413 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.672 nvme0n1 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.672 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.931 nvme0n1 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.931 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.932 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.190 nvme0n1 00:21:25.190 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.190 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.190 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.190 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.191 
09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.191 09:32:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.191 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.450 nvme0n1 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:25.450 09:32:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:25.450 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.451 09:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.451 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.710 nvme0n1 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.710 09:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.710 nvme0n1 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.710 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:25.970 
09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
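The repeated xtrace blocks above and below are all the same connect/verify/disconnect cycle from host/auth.sh, re-run with a different --dhchap-dhgroups value (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144) and key index (0-4) each time: program the target-side secret with nvmet_auth_set_key, restrict the SPDK host to the digest/DH group under test with bdev_nvme_set_options, attach over TCP with DH-HMAC-CHAP via bdev_nvme_attach_controller, confirm with bdev_nvme_get_controllers that a controller named nvme0 appeared, and detach it again. A minimal sketch of that loop follows, written against the same RPC calls that appear in the trace; it assumes, as the parts of auth.sh not shown in this excerpt set up, that rpc_cmd wraps scripts/rpc.py, that nvmet_auth_set_key configures the kernel nvmet target, that the keys[]/ckeys[] arrays hold secrets named key0..key4/ckey0..ckey4, and that the target listens on 10.0.0.1:4420.

    #!/usr/bin/env bash
    # Sketch of the per-dhgroup / per-key cycle driving the trace in this log.
    # Assumptions (from earlier, unshown parts of auth.sh): rpc_cmd wraps
    # scripts/rpc.py, nvmet_auth_set_key programs the kernel target side, and
    # keys[]/ckeys[] hold the DHHC-1 secrets registered as key0..key4/ckey0..ckey4.
    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)

    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: install the secret (and controller secret) for this key id.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: only negotiate the digest and DH group under test.
        rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with the host key; pass the controller key only when a
        # bidirectional secret (ckeyN) exists for this key id.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

        # The authenticated connection must have produced a controller; then
        # detach so the next iteration starts from a clean slate.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The DHHC-1:NN:... strings echoed throughout the trace are DH-HMAC-CHAP secrets in the NVMe text representation: the NN field records which hash the secret was generated for (00 = unspecified, 01/02/03 = SHA-256/384/512) and the remainder is the base64-encoded secret with a CRC appended, which is why the key and ckey values echoed for the different key ids vary in length.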
00:21:25.970 nvme0n1 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.970 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:26.230 09:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.230 09:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.490 nvme0n1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.490 09:32:04 
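Before every attach, the trace calls get_main_ns_ip to decide which address to dial. Its logic is fully visible above: an associative array maps the transport name to the name of the variable that holds the address, and variable indirection resolves that name to the value (10.0.0.1 in this run, since the transport is tcp). A condensed sketch follows; the name of the transport variable is an assumption here, since the trace only shows its value.

```bash
# Condensed from the get_main_ns_ip trace above (nvmf/common.sh).
# TEST_TRANSPORT is an assumed variable name; the log only shows "tcp".
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP  # rdma runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP      # tcp runs dial the initiator-side IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. "NVMF_INITIATOR_IP"
    [[ -z ${!ip} ]] && return 1            # indirection: the actual address
    echo "${!ip}"                          # 10.0.0.1 in this run
}
```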
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.490 09:32:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.490 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.750 nvme0n1 00:21:26.750 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.750 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.750 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.750 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.750 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.750 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:27.009 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.010 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.287 nvme0n1 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.287 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.288 09:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.854 nvme0n1 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.854 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.113 nvme0n1 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njg5OGZjNmQ0YzFmZTc4MWYyYzAwMDQ0NjYwNjhmMDiCYnr7: 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: ]] 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmE1ZTVhYTlkNDVhOTkyYmIyMjM5MjZkN2JiZDhhYjE0ODdjMGU4YjA5NjgwMGNiODBlYzRlY2QzZjU3MjMxONTIAr8=: 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.113 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.114 09:32:05 
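On the host side, each connect_authenticate iteration first restricts the allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options and then attaches the controller with the matching key, adding --dhchap-ctrlr-key only when that keyid has a controller key. A hedged sketch of what the rpc_cmd calls above are doing; rpc_cmd is assumed to be the test's wrapper around the SPDK JSON-RPC client, and the keyN/ckeyN names are assumed to have been registered earlier in the test (not shown in this excerpt).

```bash
# Sketch of one connect_authenticate iteration, matching the sha512/ffdhe8192
# step traced above. rpc_cmd is assumed to wrap the SPDK JSON-RPC client.
digest=sha512 dhgroup=ffdhe8192 keyid=0
ip=$(get_main_ns_ip)   # 10.0.0.1 in this run

# Only accept this digest/dhgroup pair for DH-HMAC-CHAP on the host.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Mirror of the ckey=(...) line in the trace: add the controller-key option
# only if ckeys[keyid] is non-empty.
ckey_opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey_opt[@]}"
```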
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.114 09:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.681 nvme0n1 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:28.681 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.682 09:32:06 
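After each attach, the test checks that the authenticated controller actually came up and then tears it down before moving to the next combination; this is the bdev_nvme_get_controllers / jq / detach pattern that repeats throughout the trace. A short sketch of that verification step:

```bash
# Verify the authenticated connection, then clean up (the pattern that the
# trace renders as [[ nvme0 == \n\v\m\e\0 ]], i.e. a literal string match).
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                  # exactly one controller named nvme0
rpc_cmd bdev_nvme_detach_controller nvme0
```

The escaped form in the xtrace output comes from quoting the right-hand side of ==, which forces a literal comparison instead of a glob match.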
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.682 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.258 nvme0n1 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:29.258 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:29.556 09:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:29.557 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.557 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.557 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.815 nvme0n1 00:21:29.815 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMzZjNmNmVhYWIzYWI1NWIyNmRlMmY4ZGI4MjQ1ZWFmMTY5YTNjNGU2NjJiNDQ09flMPg==: 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmNlNzBmOTRiNzA2Zjg0M2MyNzYwYjdlMjY3OGQxM2KR8UvU: 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.074 09:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.641 nvme0n1 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmI2ZTIwZmYwNWZlNjUyNzJlZmVkMmUyMmQ4NDM3YTcxNWRhZTIzYTlmNjVhYTcwMmUzMzk2ZWVkYjdhMjMxY4TwsEM=: 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.641 09:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.641 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.208 nvme0n1 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.208 request: 00:21:31.208 { 00:21:31.208 "name": "nvme0", 00:21:31.208 "trtype": "tcp", 00:21:31.208 "traddr": "10.0.0.1", 00:21:31.208 "adrfam": "ipv4", 00:21:31.208 "trsvcid": "4420", 00:21:31.208 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:31.208 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:31.208 "prchk_reftag": false, 00:21:31.208 "prchk_guard": false, 00:21:31.208 "hdgst": false, 00:21:31.208 "ddgst": false, 00:21:31.208 "allow_unrecognized_csi": false, 00:21:31.208 "method": "bdev_nvme_attach_controller", 00:21:31.208 "req_id": 1 00:21:31.208 } 00:21:31.208 Got JSON-RPC error response 00:21:31.208 response: 00:21:31.208 { 00:21:31.208 "code": -5, 00:21:31.208 "message": "Input/output error" 00:21:31.208 } 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.208 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.209 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.209 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.209 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.209 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.209 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:31.209 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.467 request: 00:21:31.467 { 00:21:31.467 "name": "nvme0", 00:21:31.467 "trtype": "tcp", 00:21:31.467 "traddr": "10.0.0.1", 00:21:31.467 "adrfam": "ipv4", 00:21:31.467 "trsvcid": "4420", 00:21:31.467 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:31.467 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:31.467 "prchk_reftag": false, 00:21:31.467 "prchk_guard": false, 00:21:31.467 "hdgst": false, 00:21:31.467 "ddgst": false, 00:21:31.467 "dhchap_key": "key2", 00:21:31.467 "allow_unrecognized_csi": false, 00:21:31.467 "method": "bdev_nvme_attach_controller", 00:21:31.467 "req_id": 1 00:21:31.467 } 00:21:31.467 Got JSON-RPC error response 00:21:31.467 response: 00:21:31.467 { 00:21:31.467 "code": -5, 00:21:31.467 "message": "Input/output error" 00:21:31.467 } 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.467 09:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.467 09:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.467 request: 00:21:31.467 { 00:21:31.467 "name": "nvme0", 00:21:31.467 "trtype": "tcp", 00:21:31.467 "traddr": "10.0.0.1", 00:21:31.467 "adrfam": "ipv4", 00:21:31.467 "trsvcid": "4420", 
00:21:31.467 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:31.467 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:31.467 "prchk_reftag": false, 00:21:31.467 "prchk_guard": false, 00:21:31.467 "hdgst": false, 00:21:31.467 "ddgst": false, 00:21:31.467 "dhchap_key": "key1", 00:21:31.467 "dhchap_ctrlr_key": "ckey2", 00:21:31.467 "allow_unrecognized_csi": false, 00:21:31.467 "method": "bdev_nvme_attach_controller", 00:21:31.467 "req_id": 1 00:21:31.467 } 00:21:31.467 Got JSON-RPC error response 00:21:31.467 response: 00:21:31.467 { 00:21:31.467 "code": -5, 00:21:31.467 "message": "Input/output error" 00:21:31.467 } 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:31.467 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.468 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.726 nvme0n1 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:31.726 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.727 request: 00:21:31.727 { 00:21:31.727 "name": "nvme0", 00:21:31.727 "dhchap_key": "key1", 00:21:31.727 "dhchap_ctrlr_key": "ckey2", 00:21:31.727 "method": "bdev_nvme_set_keys", 00:21:31.727 "req_id": 1 00:21:31.727 } 00:21:31.727 Got JSON-RPC error response 00:21:31.727 response: 00:21:31.727 
{ 00:21:31.727 "code": -13, 00:21:31.727 "message": "Permission denied" 00:21:31.727 } 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:31.727 09:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:32.664 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.664 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:32.664 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.664 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.664 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBkODE4NDY4MDdhMzNmMmM5Y2M5NGY5YTcwZTQzNTRmNDgzYWQzNTNiOWQ3NmNiMjvVtg==: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTFkNTVhZjQwZWM5ZmY4ODU2NzEyMWZmNTI5NDk1Mzg5Yjk0YmFkZmI4NDk1MTE1/ttDVg==: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.923 nvme0n1 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTRmOTYwOWNhNzRiYzhjOWNkZDRhYWZlNDA3ZjgxM2PFU+dB: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: ]] 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDlkZjRlY2MyNjA5ZTRmNDA3ZjY3MzhiZTMwNmQzYjGgxGXD: 00:21:32.923 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.924 request: 00:21:32.924 { 00:21:32.924 "name": "nvme0", 00:21:32.924 "dhchap_key": "key2", 00:21:32.924 "dhchap_ctrlr_key": "ckey1", 00:21:32.924 "method": "bdev_nvme_set_keys", 00:21:32.924 "req_id": 1 00:21:32.924 } 00:21:32.924 Got JSON-RPC error response 00:21:32.924 response: 00:21:32.924 { 00:21:32.924 "code": -13, 00:21:32.924 "message": "Permission denied" 00:21:32.924 } 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:32.924 09:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.300 rmmod nvme_tcp 00:21:34.300 rmmod nvme_fabrics 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78046 ']' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78046 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78046 ']' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78046 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78046 00:21:34.300 killing process with pid 78046 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78046' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78046 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78046 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:34.300 09:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:34.300 09:32:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:34.300 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:34.559 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:34.853 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:34.853 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:34.853 09:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:35.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.679 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:35.679 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:35.679 09:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.o3M /tmp/spdk.key-null.3ZT /tmp/spdk.key-sha256.ZKl /tmp/spdk.key-sha384.urz /tmp/spdk.key-sha512.Z82 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:35.679 09:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:36.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:36.246 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:36.246 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:36.504 00:21:36.504 real 0m36.455s 00:21:36.504 user 0m33.460s 00:21:36.504 sys 0m5.382s 00:21:36.504 ************************************ 00:21:36.504 END TEST nvmf_auth_host 00:21:36.504 ************************************ 00:21:36.504 09:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.504 09:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.504 ************************************ 00:21:36.504 START TEST nvmf_digest 00:21:36.504 ************************************ 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:36.504 * Looking for test storage... 
00:21:36.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:21:36.504 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:36.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.765 --rc genhtml_branch_coverage=1 00:21:36.765 --rc genhtml_function_coverage=1 00:21:36.765 --rc genhtml_legend=1 00:21:36.765 --rc geninfo_all_blocks=1 00:21:36.765 --rc geninfo_unexecuted_blocks=1 00:21:36.765 00:21:36.765 ' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.765 09:32:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.765 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:36.766 Cannot find device "nvmf_init_br" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:36.766 Cannot find device "nvmf_init_br2" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:36.766 Cannot find device "nvmf_tgt_br" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:36.766 Cannot find device "nvmf_tgt_br2" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:36.766 Cannot find device "nvmf_init_br" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:36.766 Cannot find device "nvmf_init_br2" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:36.766 Cannot find device "nvmf_tgt_br" 00:21:36.766 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:36.767 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:36.767 Cannot find device "nvmf_tgt_br2" 00:21:36.767 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:36.767 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:37.026 Cannot find device "nvmf_br" 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:37.026 Cannot find device "nvmf_init_if" 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:37.026 Cannot find device "nvmf_init_if2" 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:37.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:37.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:37.026 09:32:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:37.026 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:37.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:37.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:21:37.286 00:21:37.286 --- 10.0.0.3 ping statistics --- 00:21:37.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.286 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:37.286 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:37.286 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:21:37.286 00:21:37.286 --- 10.0.0.4 ping statistics --- 00:21:37.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.286 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:37.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:21:37.286 00:21:37.286 --- 10.0.0.1 ping statistics --- 00:21:37.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.286 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:37.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:21:37.286 00:21:37.286 --- 10.0.0.2 ping statistics --- 00:21:37.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.286 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:37.286 ************************************ 00:21:37.286 START TEST nvmf_digest_clean 00:21:37.286 ************************************ 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
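The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down whatever a previous run may have left behind (each failing ip command in the trace is immediately followed by a true) before rebuilding the topology. Condensed into a shell sketch, with interface names and addresses taken from the trace and only the grouping comments added, the setup amounts to:

  # Target half lives in its own namespace; initiators stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk

  # Four veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # One /24 across both sides: initiators .1/.2, targets .3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # (all links, plus lo inside the namespace, are then brought up, as in @196-@204 above)

  # Bridge the four peer ends together and open TCP/4420 plus bridge forwarding;
  # the SPDK_NVMF comment tag lets later cleanup delete exactly these rules.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # (the same INPUT rule is added for nvmf_init_if2)
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four pings above (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside the namespace) verify the bridge end to end before any NVMe/TCP traffic is attempted.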
00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79687 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79687 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79687 ']' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.286 09:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:37.286 [2024-12-09 09:32:14.964246] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:37.286 [2024-12-09 09:32:14.964307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.543 [2024-12-09 09:32:15.116040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.543 [2024-12-09 09:32:15.157378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.543 [2024-12-09 09:32:15.157432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.543 [2024-12-09 09:32:15.157448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.543 [2024-12-09 09:32:15.157471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.543 [2024-12-09 09:32:15.157483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
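At this point nvmfappstart has launched the target inside that namespace (pid 79687 above) and waitforlisten blocks until the RPC socket answers. A minimal equivalent of the traced commands; the polling loop is paraphrased, not the verbatim autotest_common.sh helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # waitforlisten: poll the UNIX-domain RPC socket until the app answers, bail out if it died.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
  done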
00:21:37.544 [2024-12-09 09:32:15.157843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.110 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.110 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:38.110 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.110 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.110 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:38.369 [2024-12-09 09:32:15.922426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:38.369 null0 00:21:38.369 [2024-12-09 09:32:15.967693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.369 [2024-12-09 09:32:15.991780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:38.369 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79719 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:38.370 09:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79719 /var/tmp/bperf.sock 00:21:38.370 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79719 ']' 00:21:38.370 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:38.370 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:38.370 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:38.370 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.370 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:38.370 [2024-12-09 09:32:16.048821] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:38.370 [2024-12-09 09:32:16.048886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79719 ] 00:21:38.628 [2024-12-09 09:32:16.200868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.628 [2024-12-09 09:32:16.244112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.196 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.196 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:39.196 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:39.196 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:39.196 09:32:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:39.454 [2024-12-09 09:32:17.144835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.712 09:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.712 09:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.970 nvme0n1 00:21:39.970 09:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:39.970 09:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:39.970 Running I/O for 2 seconds... 
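Each run_bperf pass drives a second SPDK application, bdevperf, over its own RPC socket: start it suspended with --wait-for-rpc, finish framework init, attach an NVMe-oF controller with data digest enabled, then run the timed workload. The first pass traced above corresponds to this sketch:

  bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  "$bperf" -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # (waitforlisten polls $sock for pid 79719 exactly as shown earlier, omitted here)

  "$rpc" -s "$sock" framework_start_init
  # --ddgst enables NVMe/TCP data digests on the connection; this creates bdev nvme0n1.
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # perform_tests starts the registered job; the IOPS/latency summary and JSON below are its output.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests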
00:21:41.944 18542.00 IOPS, 72.43 MiB/s [2024-12-09T09:32:19.667Z] 18034.00 IOPS, 70.45 MiB/s 00:21:41.944 Latency(us) 00:21:41.944 [2024-12-09T09:32:19.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.944 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:41.944 nvme0n1 : 2.01 18019.87 70.39 0.00 0.00 7098.69 6343.04 19476.56 00:21:41.944 [2024-12-09T09:32:19.667Z] =================================================================================================================== 00:21:41.944 [2024-12-09T09:32:19.667Z] Total : 18019.87 70.39 0.00 0.00 7098.69 6343.04 19476.56 00:21:41.944 { 00:21:41.944 "results": [ 00:21:41.944 { 00:21:41.944 "job": "nvme0n1", 00:21:41.944 "core_mask": "0x2", 00:21:41.944 "workload": "randread", 00:21:41.944 "status": "finished", 00:21:41.944 "queue_depth": 128, 00:21:41.944 "io_size": 4096, 00:21:41.944 "runtime": 2.008672, 00:21:41.944 "iops": 18019.865861624, 00:21:41.944 "mibps": 70.39010102196875, 00:21:41.944 "io_failed": 0, 00:21:41.944 "io_timeout": 0, 00:21:41.944 "avg_latency_us": 7098.692791277832, 00:21:41.944 "min_latency_us": 6343.042570281124, 00:21:41.944 "max_latency_us": 19476.562248995982 00:21:41.944 } 00:21:41.944 ], 00:21:41.944 "core_count": 1 00:21:41.944 } 00:21:41.944 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:41.944 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:41.944 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:41.944 | select(.opcode=="crc32c") 00:21:41.944 | "\(.module_name) \(.executed)"' 00:21:41.944 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:41.944 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79719 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79719 ']' 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79719 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79719 00:21:42.203 killing process with pid 79719 00:21:42.203 Received shutdown signal, test time was about 2.000000 seconds 00:21:42.203 00:21:42.203 Latency(us) 00:21:42.203 [2024-12-09T09:32:19.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:42.203 [2024-12-09T09:32:19.926Z] =================================================================================================================== 00:21:42.203 [2024-12-09T09:32:19.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79719' 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79719 00:21:42.203 09:32:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79719 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79778 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79778 /var/tmp/bperf.sock 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79778 ']' 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:42.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.485 09:32:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:42.485 [2024-12-09 09:32:20.112803] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:42.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:42.485 Zero copy mechanism will not be used. 
00:21:42.485 [2024-12-09 09:32:20.112878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79778 ] 00:21:42.743 [2024-12-09 09:32:20.258850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.743 [2024-12-09 09:32:20.309190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.679 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.679 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:43.679 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:43.679 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:43.679 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:43.679 [2024-12-09 09:32:21.359828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.938 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:43.938 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.196 nvme0n1 00:21:44.196 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:44.196 09:32:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.196 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:44.196 Zero copy mechanism will not be used. 00:21:44.196 Running I/O for 2 seconds... 
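The clean-digest test repeats that flow four times, varying only the workload shape; the host/digest.sh@128-@131 invocations visible in this trace are, in order:

  #          rw          bs     qd   dsa
  run_bperf  randread    4096   128  false
  run_bperf  randread  131072    16  false    # 128 KiB I/O exceeds the 64 KiB sock zero-copy threshold, hence the notice above
  run_bperf  randwrite   4096   128  false
  run_bperf  randwrite 131072    16  false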
00:21:46.505 8320.00 IOPS, 1040.00 MiB/s [2024-12-09T09:32:24.228Z] 8584.00 IOPS, 1073.00 MiB/s 00:21:46.505 Latency(us) 00:21:46.505 [2024-12-09T09:32:24.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.505 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:46.505 nvme0n1 : 2.00 8582.64 1072.83 0.00 0.00 1861.45 1677.88 5816.65 00:21:46.505 [2024-12-09T09:32:24.228Z] =================================================================================================================== 00:21:46.505 [2024-12-09T09:32:24.228Z] Total : 8582.64 1072.83 0.00 0.00 1861.45 1677.88 5816.65 00:21:46.505 { 00:21:46.505 "results": [ 00:21:46.505 { 00:21:46.505 "job": "nvme0n1", 00:21:46.505 "core_mask": "0x2", 00:21:46.505 "workload": "randread", 00:21:46.505 "status": "finished", 00:21:46.505 "queue_depth": 16, 00:21:46.505 "io_size": 131072, 00:21:46.505 "runtime": 2.002181, 00:21:46.505 "iops": 8582.640630392558, 00:21:46.505 "mibps": 1072.8300787990697, 00:21:46.505 "io_failed": 0, 00:21:46.505 "io_timeout": 0, 00:21:46.505 "avg_latency_us": 1861.451580624173, 00:21:46.505 "min_latency_us": 1677.879518072289, 00:21:46.505 "max_latency_us": 5816.6489959839355 00:21:46.505 } 00:21:46.505 ], 00:21:46.505 "core_count": 1 00:21:46.505 } 00:21:46.505 09:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:46.505 09:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:46.505 09:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:46.505 09:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:46.505 09:32:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:46.505 | select(.opcode=="crc32c") 00:21:46.505 | "\(.module_name) \(.executed)"' 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79778 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79778 ']' 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79778 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79778 00:21:46.505 killing process with pid 79778 00:21:46.505 Received shutdown signal, test time was about 2.000000 seconds 00:21:46.505 00:21:46.505 Latency(us) 00:21:46.505 [2024-12-09T09:32:24.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:46.505 [2024-12-09T09:32:24.228Z] =================================================================================================================== 00:21:46.505 [2024-12-09T09:32:24.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79778' 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79778 00:21:46.505 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79778 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79834 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79834 /var/tmp/bperf.sock 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79834 ']' 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:46.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.767 09:32:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:46.767 [2024-12-09 09:32:24.390492] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
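After every pass the test checks that the digest CRCs were really computed, and by the expected accel module: it reads accel_get_stats over the bdevperf socket, filters the crc32c opcode with the jq expression shown in the trace, and requires a non-zero execution count from the software module (DSA is disabled in all four passes). Roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  read -r acc_module acc_executed < <(
      "$rpc" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))           # crc32c actually ran during the workload
  [[ $acc_module == software ]]    # and in the module the test expected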
00:21:46.767 [2024-12-09 09:32:24.390566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79834 ] 00:21:47.025 [2024-12-09 09:32:24.541560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.025 [2024-12-09 09:32:24.588729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.592 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.592 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:47.592 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:47.592 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:47.592 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:47.849 [2024-12-09 09:32:25.477440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.849 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:47.849 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.106 nvme0n1 00:21:48.106 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:48.106 09:32:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.364 Running I/O for 2 seconds... 
00:21:50.230 19305.00 IOPS, 75.41 MiB/s [2024-12-09T09:32:27.953Z] 19431.50 IOPS, 75.90 MiB/s 00:21:50.230 Latency(us) 00:21:50.230 [2024-12-09T09:32:27.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.230 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:50.230 nvme0n1 : 2.00 19456.42 76.00 0.00 0.00 6573.82 6000.89 14739.02 00:21:50.230 [2024-12-09T09:32:27.953Z] =================================================================================================================== 00:21:50.230 [2024-12-09T09:32:27.953Z] Total : 19456.42 76.00 0.00 0.00 6573.82 6000.89 14739.02 00:21:50.230 { 00:21:50.230 "results": [ 00:21:50.230 { 00:21:50.230 "job": "nvme0n1", 00:21:50.230 "core_mask": "0x2", 00:21:50.230 "workload": "randwrite", 00:21:50.230 "status": "finished", 00:21:50.230 "queue_depth": 128, 00:21:50.230 "io_size": 4096, 00:21:50.230 "runtime": 2.004017, 00:21:50.230 "iops": 19456.421776861174, 00:21:50.230 "mibps": 76.00164756586396, 00:21:50.230 "io_failed": 0, 00:21:50.230 "io_timeout": 0, 00:21:50.230 "avg_latency_us": 6573.816491438298, 00:21:50.230 "min_latency_us": 6000.8867469879515, 00:21:50.230 "max_latency_us": 14739.020080321285 00:21:50.230 } 00:21:50.230 ], 00:21:50.230 "core_count": 1 00:21:50.230 } 00:21:50.231 09:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:50.231 09:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:50.489 09:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:50.489 09:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:50.489 09:32:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:50.489 | select(.opcode=="crc32c") 00:21:50.489 | "\(.module_name) \(.executed)"' 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79834 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79834 ']' 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79834 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.489 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79834 00:21:50.748 killing process with pid 79834 00:21:50.749 Received shutdown signal, test time was about 2.000000 seconds 00:21:50.749 00:21:50.749 Latency(us) 00:21:50.749 [2024-12-09T09:32:28.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:50.749 [2024-12-09T09:32:28.472Z] =================================================================================================================== 00:21:50.749 [2024-12-09T09:32:28.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79834' 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79834 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79834 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79895 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79895 /var/tmp/bperf.sock 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79895 ']' 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.749 09:32:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:50.749 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:50.749 Zero copy mechanism will not be used. 00:21:50.749 [2024-12-09 09:32:28.460756] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
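Each bdevperf instance, and at the very end the nvmf target itself (pid 79687), is stopped through killprocess from autotest_common.sh. The checks visible in the trace, a kill -0 liveness probe, a ps comm lookup so that a wrapping sudo is never signalled directly, then kill and wait, roughly correspond to this simplified sketch:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid"                                    # process must still be alive
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in this log
      [[ $process_name != sudo ]] || return 1           # the real helper handles sudo differently; simplified here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }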
00:21:50.749 [2024-12-09 09:32:28.460826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79895 ] 00:21:51.008 [2024-12-09 09:32:28.614662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.008 [2024-12-09 09:32:28.662627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.945 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.945 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:51.945 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:51.945 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:51.945 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:51.945 [2024-12-09 09:32:29.608908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:52.204 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:52.204 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:52.204 nvme0n1 00:21:52.463 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:52.463 09:32:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:52.463 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:52.463 Zero copy mechanism will not be used. 00:21:52.463 Running I/O for 2 seconds... 
00:21:54.337 8566.00 IOPS, 1070.75 MiB/s [2024-12-09T09:32:32.060Z] 8719.50 IOPS, 1089.94 MiB/s 00:21:54.337 Latency(us) 00:21:54.337 [2024-12-09T09:32:32.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.337 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:54.337 nvme0n1 : 2.00 8714.90 1089.36 0.00 0.00 1832.46 1309.40 6632.56 00:21:54.337 [2024-12-09T09:32:32.060Z] =================================================================================================================== 00:21:54.337 [2024-12-09T09:32:32.060Z] Total : 8714.90 1089.36 0.00 0.00 1832.46 1309.40 6632.56 00:21:54.337 { 00:21:54.337 "results": [ 00:21:54.337 { 00:21:54.337 "job": "nvme0n1", 00:21:54.337 "core_mask": "0x2", 00:21:54.337 "workload": "randwrite", 00:21:54.337 "status": "finished", 00:21:54.337 "queue_depth": 16, 00:21:54.337 "io_size": 131072, 00:21:54.337 "runtime": 2.002777, 00:21:54.337 "iops": 8714.899362235536, 00:21:54.337 "mibps": 1089.362420279442, 00:21:54.337 "io_failed": 0, 00:21:54.337 "io_timeout": 0, 00:21:54.337 "avg_latency_us": 1832.4618266810799, 00:21:54.337 "min_latency_us": 1309.4040160642571, 00:21:54.337 "max_latency_us": 6632.559036144578 00:21:54.337 } 00:21:54.337 ], 00:21:54.337 "core_count": 1 00:21:54.337 } 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:54.597 | select(.opcode=="crc32c") 00:21:54.597 | "\(.module_name) \(.executed)"' 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79895 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79895 ']' 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79895 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.597 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79895 00:21:54.875 killing process with pid 79895 00:21:54.875 Received shutdown signal, test time was about 2.000000 seconds 00:21:54.875 00:21:54.875 Latency(us) 00:21:54.875 [2024-12-09T09:32:32.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:54.875 [2024-12-09T09:32:32.598Z] =================================================================================================================== 00:21:54.875 [2024-12-09T09:32:32.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79895' 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79895 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79895 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79687 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79687 ']' 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79687 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79687 00:21:54.875 killing process with pid 79687 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79687' 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79687 00:21:54.875 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79687 00:21:55.133 00:21:55.133 real 0m17.781s 00:21:55.133 user 0m33.476s 00:21:55.133 sys 0m5.330s 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.133 ************************************ 00:21:55.133 END TEST nvmf_digest_clean 00:21:55.133 ************************************ 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:55.133 ************************************ 00:21:55.133 START TEST nvmf_digest_error 00:21:55.133 ************************************ 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:21:55.133 09:32:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79977 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79977 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79977 ']' 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.133 09:32:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:55.133 [2024-12-09 09:32:32.823947] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:55.133 [2024-12-09 09:32:32.824017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.392 [2024-12-09 09:32:32.976711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.392 [2024-12-09 09:32:33.019520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.392 [2024-12-09 09:32:33.019576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.392 [2024-12-09 09:32:33.019587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.392 [2024-12-09 09:32:33.019596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.392 [2024-12-09 09:32:33.019604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
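nvmf_digest_error reuses the same topology and bperf flow but first rewires the accel layer so bad digests can be produced on demand: crc32c on the target is assigned to the error-injection module, corruption is injected for a fixed number of operations, and bdevperf is told to keep NVMe error statistics and never retry, which is what yields the "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR" completions traced below. In outline (sockets as in this trace; the target uses the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side: route crc32c through the error module, then arm corruption.
  "$rpc" accel_assign_opc -o crc32c -m error                    # "Operation crc32c will be assigned to module error"
  "$rpc" accel_error_inject_error -o crc32c -t disable          # reset any previous injection
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c operations

  # bdevperf side: count NVMe errors and do not retry, so every corrupted digest
  # surfaces as a transport-level error on the initiator.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1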
00:21:55.392 [2024-12-09 09:32:33.019892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.326 [2024-12-09 09:32:33.763184] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.326 [2024-12-09 09:32:33.817825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:56.326 null0 00:21:56.326 [2024-12-09 09:32:33.864081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.326 [2024-12-09 09:32:33.888169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80009 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80009 /var/tmp/bperf.sock 00:21:56.326 09:32:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80009 ']' 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.326 09:32:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:56.326 [2024-12-09 09:32:33.964077] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:21:56.326 [2024-12-09 09:32:33.964166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80009 ] 00:21:56.585 [2024-12-09 09:32:34.119569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.585 [2024-12-09 09:32:34.168363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.585 [2024-12-09 09:32:34.212716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:57.153 09:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.153 09:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:57.153 09:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:57.153 09:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:57.411 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:57.411 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.411 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:57.411 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.411 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:57.411 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:57.668 nvme0n1 00:21:57.668 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:57.668 09:32:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.668 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:57.668 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.668 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:57.668 09:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:57.924 Running I/O for 2 seconds... 00:21:57.924 [2024-12-09 09:32:35.482814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.924 [2024-12-09 09:32:35.482870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.924 [2024-12-09 09:32:35.482885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.924 [2024-12-09 09:32:35.497105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.924 [2024-12-09 09:32:35.497148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.924 [2024-12-09 09:32:35.497161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.511430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.511486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.511499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.525664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.525702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.525716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.539718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.539770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.539784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.553839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.553887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23440 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.553918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.567918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.567954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.567967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.582072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.582110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.582123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.596199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.596234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.596263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.610291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.610330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.610343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.624412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.624449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.624487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:57.925 [2024-12-09 09:32:35.638544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:57.925 [2024-12-09 09:32:35.638581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.925 [2024-12-09 09:32:35.638594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.652710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.652759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.183 [2024-12-09 09:32:35.652772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.666816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.666861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.183 [2024-12-09 09:32:35.666875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.680869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.680905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.183 [2024-12-09 09:32:35.680917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.695054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.695102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.183 [2024-12-09 09:32:35.695115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.709236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.709274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.183 [2024-12-09 09:32:35.709287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.723355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.723390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.183 [2024-12-09 09:32:35.723403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.183 [2024-12-09 09:32:35.737403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.183 [2024-12-09 09:32:35.737440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.737452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.751580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.751615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.751627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.765651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.765700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.779778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.779813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.793915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.793950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.793962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.808093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.808130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.808143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.822227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.822260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.822273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.836406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.836442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.836454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.850568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 
[2024-12-09 09:32:35.850605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.850618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.864676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.864712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.864723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.878880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.878917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.878930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.184 [2024-12-09 09:32:35.893051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.184 [2024-12-09 09:32:35.893089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.184 [2024-12-09 09:32:35.893113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.907239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.907274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.907286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.921404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.921442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.921454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.935724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.935758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.935770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.949886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.949923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.949935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.963970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.964006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.964018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.977951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.977986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.977998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:35.992059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:35.992097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:35.992109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.006369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.006406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.006420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.020602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.020637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.020650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.035059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.035096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.035109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.049389] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.049425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.049453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.063777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.063812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.063824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.078116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.078152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.078166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.092988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.093025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.093038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.107242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.107281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.107295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.121557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.121595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.121609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.442 [2024-12-09 09:32:36.136019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.136057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.136071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
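Each event in the stream above is the same three-line pattern: the host's receive path reports a data digest error, the failing READ is printed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22). That is the expected outcome of the error injection set up earlier in this log. Condensed into one place, the host-side and injection commands already shown amount to roughly the sketch below; socket paths, the 10.0.0.3:4420 listener and the nqn are copied from the log, and only the sequential script framing is assumed.

SPDK=/home/vagrant/spdk_repo/spdk
# bdevperf acts as the NVMe/TCP host; -z makes it wait for a perform_tests RPC.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
# Enable NVMe error statistics and bdev-level retries with no limit (-1).
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the controller with data digest (--ddgst) enabled so received payloads are CRC-checked.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# On the target side, make the error accel module corrupt crc32c results (flags copied from the log above).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the workload; every read's digest verification then fails, as logged above.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests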
00:21:58.442 [2024-12-09 09:32:36.150307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.442 [2024-12-09 09:32:36.150343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.442 [2024-12-09 09:32:36.150356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.164615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.164650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.164663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.179034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.179072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.179085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.193406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.193445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.193469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.207882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.207920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.207933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.222493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.222529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.222543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.237228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.237280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.237293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.252059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.252096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.252109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.267082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.267119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.267132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.282271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.282312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.282325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.297431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.297500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.297531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.312326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.312365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.312378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.326864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.326900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.700 [2024-12-09 09:32:36.326915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.700 [2024-12-09 09:32:36.341225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.700 [2024-12-09 09:32:36.341261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.701 [2024-12-09 09:32:36.341273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.701 [2024-12-09 09:32:36.355450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.701 [2024-12-09 09:32:36.355498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.701 [2024-12-09 09:32:36.355512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.701 [2024-12-09 09:32:36.369471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.701 [2024-12-09 09:32:36.369521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.701 [2024-12-09 09:32:36.369557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.701 [2024-12-09 09:32:36.389776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.701 [2024-12-09 09:32:36.389812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.701 [2024-12-09 09:32:36.389832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.701 [2024-12-09 09:32:36.404056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.701 [2024-12-09 09:32:36.404091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.701 [2024-12-09 09:32:36.404104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.701 [2024-12-09 09:32:36.418095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.701 [2024-12-09 09:32:36.418130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.701 [2024-12-09 09:32:36.418142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.958 [2024-12-09 09:32:36.432260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.958 [2024-12-09 09:32:36.432298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.958 [2024-12-09 09:32:36.432312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.958 [2024-12-09 09:32:36.446360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.958 [2024-12-09 09:32:36.446395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:58.958 [2024-12-09 09:32:36.446407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.958 17585.00 IOPS, 68.69 MiB/s [2024-12-09T09:32:36.681Z] [2024-12-09 09:32:36.460583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.958 [2024-12-09 09:32:36.460620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.958 [2024-12-09 09:32:36.460644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.958 [2024-12-09 09:32:36.474669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.958 [2024-12-09 09:32:36.474704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.474716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.488775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.488813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.488826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.502954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.502990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.503003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.517036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.517074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.517087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.531112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.531149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.531162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.545195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.545231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.545243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.559381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.559420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.559433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.573555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.573590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.573603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.587646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.587684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.587697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.601764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.601802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.601816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.615909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.615945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.615958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.630107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.630143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.630157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.644428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 
00:21:58.959 [2024-12-09 09:32:36.644482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.644497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.658731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.658766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.658778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:58.959 [2024-12-09 09:32:36.672917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:58.959 [2024-12-09 09:32:36.672968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.959 [2024-12-09 09:32:36.672980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.687161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.687199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.687212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.701295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.701332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.701344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.715443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.715492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.715506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.729721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.729756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.729769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.743988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.744024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.744038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.758249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.758286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.758299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.772363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.772399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.772412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.786604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.786639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.786651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.800620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.800655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.800668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.814815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.814851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.814862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.828869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.828903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.828916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.843034] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.843071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.843085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.857161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.857197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.857209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.871326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.871363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.871376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.885556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.885593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.885606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.899752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.217 [2024-12-09 09:32:36.899787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.217 [2024-12-09 09:32:36.899801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.217 [2024-12-09 09:32:36.913851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.218 [2024-12-09 09:32:36.913885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.218 [2024-12-09 09:32:36.913913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.218 [2024-12-09 09:32:36.927990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.218 [2024-12-09 09:32:36.928026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.218 [2024-12-09 09:32:36.928040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:59.483 [2024-12-09 09:32:36.942109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:36.942145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:36.942170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:36.956304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:36.956339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:36.956352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:36.970517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:36.970554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:36.970567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:36.984564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:36.984600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:36.984611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:36.998791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:36.998827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:36.998841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.013031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.013065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.013077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.027399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.027435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.027448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.041698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.041734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.041748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.055884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.055919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.055931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.070152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.070188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.070201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.084345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.084380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.084392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.098500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.098535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.098548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.112784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.112822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.112835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.127175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.127211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.127223] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.141363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.141419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.141433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.483 [2024-12-09 09:32:37.155658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.483 [2024-12-09 09:32:37.155694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.483 [2024-12-09 09:32:37.155706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.484 [2024-12-09 09:32:37.169867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.484 [2024-12-09 09:32:37.169902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-12-09 09:32:37.169915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.484 [2024-12-09 09:32:37.184021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.484 [2024-12-09 09:32:37.184056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-12-09 09:32:37.184069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.484 [2024-12-09 09:32:37.198159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.484 [2024-12-09 09:32:37.198193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.484 [2024-12-09 09:32:37.198206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.212525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.212563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.212575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.226772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.226810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.226824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.241105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.241141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.241155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.255358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.255395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.255407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.269613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.269649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.269663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.283882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.283920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.283933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.304371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.304409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.304439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.318790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.318828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.318841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.333222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.333262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 
[2024-12-09 09:32:37.333275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.347869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.347907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.347919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.362250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.362287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.362300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.376589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.376643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.376657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.390940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.390994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.391007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.405362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.405402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.405415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.419836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.419875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.753 [2024-12-09 09:32:37.419888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:59.753 [2024-12-09 09:32:37.434445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50) 00:21:59.753 [2024-12-09 09:32:37.434494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12326 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:21:59.753 [2024-12-09 09:32:37.434508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:59.753 [2024-12-09 09:32:37.448846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50)
00:21:59.753 [2024-12-09 09:32:37.448883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:59.753 [2024-12-09 09:32:37.448896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:59.753 17647.50 IOPS, 68.94 MiB/s [2024-12-09T09:32:37.476Z] [2024-12-09 09:32:37.464568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa7cb50)
00:21:59.753 [2024-12-09 09:32:37.464603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:59.753 [2024-12-09 09:32:37.464615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:59.753
00:21:59.753 Latency(us)
00:21:59.753 [2024-12-09T09:32:37.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:59.753 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:59.753 nvme0n1 : 2.01 17684.26 69.08 0.00 0.00 7232.83 6790.48 27583.02
00:21:59.753 [2024-12-09T09:32:37.476Z] ===================================================================================================================
00:21:59.753 [2024-12-09T09:32:37.476Z] Total : 17684.26 69.08 0.00 0.00 7232.83 6790.48 27583.02
00:21:59.753 {
00:21:59.753 "results": [
00:21:59.754 {
00:21:59.754 "job": "nvme0n1",
00:21:59.754 "core_mask": "0x2",
00:21:59.754 "workload": "randread",
00:21:59.754 "status": "finished",
00:21:59.754 "queue_depth": 128,
00:21:59.754 "io_size": 4096,
00:21:59.754 "runtime": 2.010262,
00:21:59.754 "iops": 17684.262051414193,
00:21:59.754 "mibps": 69.07914863833669,
00:21:59.754 "io_failed": 0,
00:21:59.754 "io_timeout": 0,
00:21:59.754 "avg_latency_us": 7232.829838577942,
00:21:59.754 "min_latency_us": 6790.477108433735,
00:21:59.754 "max_latency_us": 27583.02329317269
00:21:59.754 }
00:21:59.754 ],
00:21:59.754 "core_count": 1
00:21:59.754 }
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:00.013 | .driver_specific
00:22:00.013 | .nvme_error
00:22:00.013 | .status_code
00:22:00.013 | .command_transient_transport_error'
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80009
00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- #
'[' -z 80009 ']' 00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80009 00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.013 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80009 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.273 killing process with pid 80009 00:22:00.273 Received shutdown signal, test time was about 2.000000 seconds 00:22:00.273 00:22:00.273 Latency(us) 00:22:00.273 [2024-12-09T09:32:37.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.273 [2024-12-09T09:32:37.996Z] =================================================================================================================== 00:22:00.273 [2024-12-09T09:32:37.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80009' 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80009 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80009 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80065 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80065 /var/tmp/bperf.sock 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80065 ']' 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
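As the get_transient_errcount trace above shows, the pass/fail check for this run is a single bdev_get_iostat RPC piped through a jq filter, which reports per-status-code NVMe error counters presumably because the controller was set up with --nvme-error-stat (the same option is applied again for the next run below). A minimal sketch of that check, assuming the same /var/tmp/bperf.sock bdevperf instance and nvme0n1 bdev; the variable names here are illustrative and not part of digest.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the trace above
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # this run counted 139 transient transport errors, so the check passes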
00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:00.273 09:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:00.533 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:00.533 Zero copy mechanism will not be used. 00:22:00.533 [2024-12-09 09:32:37.997752] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:00.533 [2024-12-09 09:32:37.997818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80065 ] 00:22:00.533 [2024-12-09 09:32:38.149174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.533 [2024-12-09 09:32:38.193861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.533 [2024-12-09 09:32:38.236487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:01.471 09:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.471 09:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:01.471 09:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:01.471 09:32:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:01.471 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:01.471 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.471 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:01.471 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.471 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:01.471 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:01.730 nvme0n1 00:22:01.730 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:01.730 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.730 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:01.730 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.730 09:32:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:01.730 09:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:01.990 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:01.990 Zero copy mechanism will not be used. 00:22:01.990 Running I/O for 2 seconds... 00:22:01.990 [2024-12-09 09:32:39.524675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.524726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.524757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.528443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.528503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.528517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.532196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.532233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.535940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.535976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.536004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.539676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.539712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.539739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.543426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.543472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.543485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
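The run now in progress (pid 80065, 128 KiB random reads at queue depth 16) was armed by the RPC sequence traced above; the following is only a condensed restatement of those commands, assuming the same /var/tmp/bperf.sock instance is the RPC target, with nothing added beyond what the log already shows:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status-code error counters, never give up retrying
  "$rpc" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable                   # no injection while the controller is attached
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$rpc" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results so received data digests mismatch
  "$bperf_py" -s /var/tmp/bperf.sock perform_tests                                              # the 2-second run whose digest errors appear below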
00:22:01.990 [2024-12-09 09:32:39.547108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.547156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.547184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.550903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.550939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.550951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.554691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.554724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.554736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.558451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.558496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.558509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.562150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.562183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.562195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.565860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.565894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.565921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.569577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.569609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.569637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.573303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.573337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.573364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.576998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.577032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.577060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.580732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.580766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.580793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.584510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.584543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.584554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.588355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.588391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.588403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.592073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.592108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.990 [2024-12-09 09:32:39.592121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.990 [2024-12-09 09:32:39.595788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.990 [2024-12-09 09:32:39.595824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.595836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.599445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.599489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.599502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.603220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.603259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.603271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.606979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.607016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.607027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.610713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.610752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.610764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.614500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.614537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.614549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.618201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.618235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.618247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.621914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.621948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.621960] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.625624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.625658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.625670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.629351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.629386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.629397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.633053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.633088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.633100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.636719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.636753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.636764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.640381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.640415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.640428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.644085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.644119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.644131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.647835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.647871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.647883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.651570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.651605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.651617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.655228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.655262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.655273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.658873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.658909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.658921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.662603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.662640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.662652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.666380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.666416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.666428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.670092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.670125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.670136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.673846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.673884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.673897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.677534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.677567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.677579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.681235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.681268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.681281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.684977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.685011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.685023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.688687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.688721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.688732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.692349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.692384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.991 [2024-12-09 09:32:39.692396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:01.991 [2024-12-09 09:32:39.696091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.991 [2024-12-09 09:32:39.696126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.992 [2024-12-09 09:32:39.696137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:01.992 [2024-12-09 09:32:39.699837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.992 [2024-12-09 09:32:39.699871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.992 [2024-12-09 09:32:39.699883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:01.992 [2024-12-09 09:32:39.703575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.992 [2024-12-09 09:32:39.703609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.992 [2024-12-09 09:32:39.703621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:01.992 [2024-12-09 09:32:39.707294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:01.992 [2024-12-09 09:32:39.707333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.992 [2024-12-09 09:32:39.707344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.710987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.711022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.711033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.714705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.714742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.714755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.718432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.718481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.718493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.722114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.722146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.722159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.725892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 
[2024-12-09 09:32:39.725927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.725939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.729614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.729647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.729659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.733302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.733337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.733349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.736969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.737004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.737016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.740685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.740720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.740732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.744387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.744424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.744435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.748125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.748162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.748174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.751845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.751879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.751890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.755558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.253 [2024-12-09 09:32:39.755595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.253 [2024-12-09 09:32:39.755607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.253 [2024-12-09 09:32:39.759276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.759313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.759324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.762977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.763013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.763026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.766800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.766838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.766850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.770696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.770734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.770746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.774414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.774452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.774477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.778076] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.778108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.778120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.781735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.781770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.781782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.785484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.785515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.785527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.789187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.789221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.789233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.792899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.792932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.792944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.796579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.796612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.796640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.800274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.800309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.800320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:02.254 [2024-12-09 09:32:39.804035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.804068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.804080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.807804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.807839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.807850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.811554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.811593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.811605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.815226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.815263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.815275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.818969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.819005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.819017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.822654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.822690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.822702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.826379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.826414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.826426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.830067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.830099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.830110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.833724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.833759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.833770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.837481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.837511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.837523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.841132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.841166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.841178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.844900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.844934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.844947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.848620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.848653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.852325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.852360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.852372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.254 [2024-12-09 09:32:39.856055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.254 [2024-12-09 09:32:39.856093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.254 [2024-12-09 09:32:39.856105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.859726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.859762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.859773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.863438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.863486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.863498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.867137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.867172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.867184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.870890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.870926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.870938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.874604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.874639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.874650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.878299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.878335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 
09:32:39.878348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.882017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.882057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.882069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.885836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.885870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.885882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.889527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.889559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.889571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.893218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.893252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.893264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.896973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.897008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.897020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.900726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.900760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.900772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.904418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.904453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.904479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.908175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.908212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.908223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.911879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.911916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.911929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.915621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.915656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.915668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.919382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.919418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.919430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.923096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.923133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.923145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.926836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.926872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.926884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.930567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.930603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.930615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.934296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.934330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.934342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.938061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.938093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.938105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.941749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.941783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.941795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.945471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.945504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.945515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.949226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.949263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.949275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.952977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.953011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.255 [2024-12-09 09:32:39.953022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.255 [2024-12-09 09:32:39.956723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.255 [2024-12-09 09:32:39.956756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.256 [2024-12-09 09:32:39.956768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.256 [2024-12-09 09:32:39.960396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.256 [2024-12-09 09:32:39.960430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.256 [2024-12-09 09:32:39.960442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.256 [2024-12-09 09:32:39.964094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.256 [2024-12-09 09:32:39.964130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.256 [2024-12-09 09:32:39.964142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.256 [2024-12-09 09:32:39.967830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.256 [2024-12-09 09:32:39.967866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.256 [2024-12-09 09:32:39.967878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.256 [2024-12-09 09:32:39.971503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.256 [2024-12-09 09:32:39.971537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.256 [2024-12-09 09:32:39.971549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.975199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:39.975235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.975247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.978991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:39.979026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.979037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.982697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 
[2024-12-09 09:32:39.982730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.982742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.986418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:39.986454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.986478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.990104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:39.990137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.990149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.993818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:39.993851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.993863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:39.997503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:39.997534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:39.997545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:40.001187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:40.001219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:40.001231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:40.005162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:40.005197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:40.005209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:40.008838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:40.008872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:40.008884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:40.012538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:40.012571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:40.012583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:40.016216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.517 [2024-12-09 09:32:40.016253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.517 [2024-12-09 09:32:40.016266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.517 [2024-12-09 09:32:40.020058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.020095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.020107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.023802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.023838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.023850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.027553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.027589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.027601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.031261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.031297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.031308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.035001] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.035038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.035049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.038754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.038791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.038802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.042455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.042501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.042513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.046171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.046206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.046219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.049883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.049917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.049929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.053553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.053587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.053599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.057237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.057271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.057283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.060911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.060945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.060957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.064584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.064618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.064630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.068257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.068293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.068305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.071956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.071992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.072003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.075654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.075689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.075700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.079313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.079349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.079360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.083102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.083140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.083152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.086845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.086882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.086894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.090497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.090531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.090543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.094222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.094257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.094269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.097985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.098023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.098035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.101687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.101723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.101735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.105377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.105413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.105425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.109106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.109141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.109153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.112837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.112872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.112884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.518 [2024-12-09 09:32:40.116530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.518 [2024-12-09 09:32:40.116565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.518 [2024-12-09 09:32:40.116577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.120266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.120303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.120315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.124070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.124108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.124120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.127796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.127833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.127844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.131495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.131531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.131543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.135182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.135217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:02.519 [2024-12-09 09:32:40.135229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.138936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.138970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.138982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.142662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.142697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.142709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.146331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.146366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.146379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.150036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.150077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.150089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.153856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.153889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.153901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.157576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.157610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.157622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.161190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.161224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.161236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.164888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.164922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.164934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.168625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.168658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.168670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.172286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.172320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.172348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.176022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.176058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.176085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.179746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.179781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.179793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.183421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.183456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.183480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.187630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.187665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.187692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.191349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.191383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.191394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.195073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.195110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.195122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.198837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.198870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.198881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.202514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.202547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.202559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.206238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.206273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.206285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.209999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.210032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.210067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.213715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 
[2024-12-09 09:32:40.213748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.519 [2024-12-09 09:32:40.213776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.519 [2024-12-09 09:32:40.217432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.519 [2024-12-09 09:32:40.217478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.520 [2024-12-09 09:32:40.217490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.520 [2024-12-09 09:32:40.221077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.520 [2024-12-09 09:32:40.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.520 [2024-12-09 09:32:40.221138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.520 [2024-12-09 09:32:40.224800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.520 [2024-12-09 09:32:40.224833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.520 [2024-12-09 09:32:40.224861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.520 [2024-12-09 09:32:40.228522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.520 [2024-12-09 09:32:40.228554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.520 [2024-12-09 09:32:40.228581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.520 [2024-12-09 09:32:40.232216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.520 [2024-12-09 09:32:40.232250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.520 [2024-12-09 09:32:40.232261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.520 [2024-12-09 09:32:40.235991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.520 [2024-12-09 09:32:40.236026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.520 [2024-12-09 09:32:40.236037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.239726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.239762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.239775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.243436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.243482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.243493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.247075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.247108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.247136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.250849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.250884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.250896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.254560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.254594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.254607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.258212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.258246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.258273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.261938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.261972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.262000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.265650] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.265683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.265695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.269292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.269342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.269354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.273048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.273082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.273094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.276718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.276751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.276779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.280435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.280481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.280492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.284096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.284130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.284141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.287851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.287886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.287898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:22:02.781 [2024-12-09 09:32:40.291525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.781 [2024-12-09 09:32:40.291558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.781 [2024-12-09 09:32:40.291586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.781 [2024-12-09 09:32:40.295243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.295277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.295288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.298947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.298983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.298995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.302657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.302692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.302703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.306318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.306352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.306364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.310019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.310058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.310070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.313732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.313767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.313779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.317512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.317545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.317556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.321235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.321268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.321280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.324894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.324928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.324939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.328651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.328684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.328696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.332377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.332413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.332424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.336064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.336100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.336113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.339762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.339796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.339807] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.343502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.343535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.343546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.347241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.347276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.347287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.350996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.351031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.351043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.354701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.354735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.354761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.358386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.358421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.358433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.362014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.362054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.362066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.365757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.365790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.365801] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.369455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.369499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.369528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.373142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.373175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.373203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.376845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.376878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.376906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.380521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.380552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.380563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.384211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.384246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.384273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.387963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.387998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.782 [2024-12-09 09:32:40.388009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.782 [2024-12-09 09:32:40.391673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.782 [2024-12-09 09:32:40.391707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:02.783 [2024-12-09 09:32:40.391719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.395303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.395341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.395352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.399064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.399099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.399110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.402788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.402823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.402835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.406527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.406562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.410235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.410270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.410282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.413914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.413947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.413959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.417643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.417677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.417689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.421368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.421401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.421413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.425070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.425106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.425118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.428772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.428805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.428816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.432484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.432516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.432527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.436247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.436281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.436292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.439949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.439984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.439996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.443705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.443741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.443753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.447410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.447447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.447471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.451213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.451250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.451262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.454981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.455018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.455031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.458722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.458757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.458769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.462459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.462503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.462515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.466209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.466244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.466255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.469889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 
00:22:02.783 [2024-12-09 09:32:40.469924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.469935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.473609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.473642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.473653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.477259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.477293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.477305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.480952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.480985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.480997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.484656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.783 [2024-12-09 09:32:40.484688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.783 [2024-12-09 09:32:40.484699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:02.783 [2024-12-09 09:32:40.488306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.784 [2024-12-09 09:32:40.488342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.784 [2024-12-09 09:32:40.488353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:02.784 [2024-12-09 09:32:40.492022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.784 [2024-12-09 09:32:40.492056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.784 [2024-12-09 09:32:40.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:02.784 [2024-12-09 09:32:40.495727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.784 [2024-12-09 09:32:40.495761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.784 [2024-12-09 09:32:40.495788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:02.784 [2024-12-09 09:32:40.499510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:02.784 [2024-12-09 09:32:40.499544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.784 [2024-12-09 09:32:40.499572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.503237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.503273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.503285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.506923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.506959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.506970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.510656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.510693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.510704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.514381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.514417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.514429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.518122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.518161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.518173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.045 8292.00 IOPS, 1036.50 MiB/s 
[2024-12-09T09:32:40.768Z] [2024-12-09 09:32:40.522961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.522998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.523010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.526746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.526781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.526793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.530525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.530559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.530571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.534203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.534238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.534250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.537953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.537987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.537998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.541645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.541679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.541702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.545299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.545332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.545360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.549010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.549042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.549069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.045 [2024-12-09 09:32:40.552700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.045 [2024-12-09 09:32:40.552734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.045 [2024-12-09 09:32:40.552761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.556353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.556387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.556398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.560011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.560047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.560058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.563682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.563717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.563728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.567389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.567424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.567435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.571116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.571153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.571165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.574834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.574869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.574881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.578437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.578482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.578510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.582206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.582244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.582256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.585898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.585932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.585959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.589629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.589662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.589673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.593393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.593425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.046 [2024-12-09 09:32:40.593437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.046 [2024-12-09 09:32:40.597089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.046 [2024-12-09 09:32:40.597124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:03.046 [2024-12-09 09:32:40.597135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:22:03.046 [2024-12-09 09:32:40.600835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620)
00:22:03.046 [2024-12-09 09:32:40.600868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:03.046 [2024-12-09 09:32:40.600896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:22:03.046 [2024-12-09 09:32:40.604532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620)
00:22:03.046 [2024-12-09 09:32:40.604563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:03.046 [2024-12-09 09:32:40.604590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same triplet (nvme_tcp.c:1365 data digest error on tqpair=(0x1701620), nvme_qpair.c:243 READ on sqid:1, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats with cid cycling 0-15 and varying lba values, from [2024-12-09 09:32:40.608278] through [2024-12-09 09:32:40.749617] ...]
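Each triplet above is the SPDK NVMe/TCP initiator rejecting received READ data whose CRC-32C data digest (DDGST) does not match the payload: nvme_tcp_accel_seq_recv_compute_crc32_done reports the mismatch, and the affected command is then completed with the generic status Command Transient Transport Error, which is what the (00/22) in spdk_nvme_print_completion denotes. As a rough illustration only (this is not SPDK's code; the function names and the bit-flip below are made up for the sketch), the NVMe/TCP data digest is a plain CRC-32C over the PDU data, so any corrupted byte changes the digest:

# Minimal sketch, assuming only that the digest is standard CRC-32C (Castagnoli).
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial

def _make_table():
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_table()

def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    """Table-driven CRC-32C; final value is XORed with 0xFFFFFFFF."""
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Published check value for CRC-32C("123456789") is 0xE3069283.
    assert crc32c(b"123456789") == 0xE3069283
    payload = bytes(4096)                 # stand-in for one received data PDU
    good = crc32c(payload)
    corrupted = bytearray(payload)
    corrupted[0] ^= 0x01                  # hypothetical single-bit corruption
    assert crc32c(bytes(corrupted)) != good   # digest no longer matches
    print(f"clean payload digest: 0x{good:08x}")

Running the sketch shows that flipping a single bit in the payload yields a different CRC-32C, which is the mismatch condition the initiator is reporting in the entries above and below.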
[... the identical data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets continue on tqpair=(0x1701620), qid:1, cid cycling 0-15, from [2024-12-09 09:32:40.753258] through [2024-12-09 09:32:41.106908] ...]
00:22:03.575 [2024-12-09 09:32:41.110554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620)
00:22:03.575 [2024-12-09 09:32:41.110588] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.575 [2024-12-09 09:32:41.110616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.575 [2024-12-09 09:32:41.114164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.575 [2024-12-09 09:32:41.114197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.575 [2024-12-09 09:32:41.114225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.575 [2024-12-09 09:32:41.117867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.575 [2024-12-09 09:32:41.117903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.575 [2024-12-09 09:32:41.117914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.575 [2024-12-09 09:32:41.121555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.575 [2024-12-09 09:32:41.121590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.575 [2024-12-09 09:32:41.121602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.575 [2024-12-09 09:32:41.125246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.575 [2024-12-09 09:32:41.125280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.575 [2024-12-09 09:32:41.125291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.575 [2024-12-09 09:32:41.128958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.575 [2024-12-09 09:32:41.128992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.129004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.132674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.132708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.132720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.136395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 
09:32:41.136431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.136442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.140140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.140174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.140186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.143848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.143881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.143893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.147501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.147534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.147546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.151188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.151226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.151238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.154871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.154906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.154918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.158580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.158616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.158627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.162306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.162345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.162357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.166013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.166054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.166083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.169677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.169711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.173332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.173365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.173377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.177049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.177082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.177093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.180758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.180793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.180804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.184456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.184498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.184526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.188254] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.188287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.188299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.191995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.192031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.192058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.195766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.195801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.195828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.199533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.199565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.199592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.203215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.203250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.203278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.206962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.206997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.207025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.210681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.210715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.210743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:22:03.576 [2024-12-09 09:32:41.214433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.214479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.214492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.218186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.218221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.218232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.221904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.221938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.221949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.225651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.576 [2024-12-09 09:32:41.225684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.576 [2024-12-09 09:32:41.225696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.576 [2024-12-09 09:32:41.229378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.229411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.229423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.233082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.233116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.233143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.236904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.236939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.236951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.240593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.240626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.240654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.244320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.244356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.244367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.248012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.248046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.248074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.251777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.251812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.251839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.255611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.255658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.255669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.259399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.259435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.259447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.263106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.263141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.263153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.266909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.266944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.266956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.270690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.270726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.270738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.274412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.274448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.274478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.278114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.278147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.278159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.281777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.281809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.281820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.285437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.285497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.285509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.289157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.289193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 
[2024-12-09 09:32:41.289221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.577 [2024-12-09 09:32:41.292901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.577 [2024-12-09 09:32:41.292932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.577 [2024-12-09 09:32:41.292944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.838 [2024-12-09 09:32:41.296594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.838 [2024-12-09 09:32:41.296627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.838 [2024-12-09 09:32:41.296638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.838 [2024-12-09 09:32:41.300307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.838 [2024-12-09 09:32:41.300343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.838 [2024-12-09 09:32:41.300355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.838 [2024-12-09 09:32:41.303966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.838 [2024-12-09 09:32:41.304001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.838 [2024-12-09 09:32:41.304012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.838 [2024-12-09 09:32:41.307661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.838 [2024-12-09 09:32:41.307696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.307707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.311345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.311381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.311408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.315098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.315131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.315158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.318822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.318859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.318871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.322593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.322629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.322641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.326307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.326343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.326354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.329997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.330031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.330049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.333719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.333753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.333764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.337431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.337475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.337487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.341150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.341183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.341194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.344806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.344839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.344850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.348483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.348514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.348525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.352152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.352187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.352199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.355862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.355897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.355909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.359574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.359609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.359621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.363265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.363301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.363312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.366968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.367004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.367015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.370646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.370681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.370693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.374356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.374392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.374404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.378088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.378120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.378131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.381786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.381820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.381831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.385527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.385559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.385571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.389161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.389194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.389221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.392837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 
00:22:03.839 [2024-12-09 09:32:41.392870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.392882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.396590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.396624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.396635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.400271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.400306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.839 [2024-12-09 09:32:41.400318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.839 [2024-12-09 09:32:41.403986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.839 [2024-12-09 09:32:41.404021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.404048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.407751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.407787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.407799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.411487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.411521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.411532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.415218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.415255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.415282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.418963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.419000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.419011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.422698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.422734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.422762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.426418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.426452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.426478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.430064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.430095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.430106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.433770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.433803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.433815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.437440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.437483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.437513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.441127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.441161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.441173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.444839] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.444872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.444883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.448534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.448566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.448594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.452274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.452319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.452331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.456020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.456056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.456083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.459794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.459829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.459841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.463518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.463549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.463560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.467240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.467275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.467287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:22:03.840 [2024-12-09 09:32:41.470990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.471026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.471038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.474709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.474745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.474757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.478378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.478414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.478426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.482064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.482095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.482107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.485806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.485841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.485852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.489525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.489558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.489570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.493196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.493230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.493242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.496903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.496937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.496949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.500556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.840 [2024-12-09 09:32:41.500591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.840 [2024-12-09 09:32:41.500602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.840 [2024-12-09 09:32:41.504277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.841 [2024-12-09 09:32:41.504312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.841 [2024-12-09 09:32:41.504324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.841 [2024-12-09 09:32:41.508020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.841 [2024-12-09 09:32:41.508055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.841 [2024-12-09 09:32:41.508066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.841 [2024-12-09 09:32:41.511729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.841 [2024-12-09 09:32:41.511765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.841 [2024-12-09 09:32:41.511777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:03.841 [2024-12-09 09:32:41.515389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.841 [2024-12-09 09:32:41.515424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.841 [2024-12-09 09:32:41.515436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:03.841 8300.00 IOPS, 1037.50 MiB/s [2024-12-09T09:32:41.564Z] [2024-12-09 09:32:41.520358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1701620) 00:22:03.841 [2024-12-09 09:32:41.520393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.841 [2024-12-09 
09:32:41.520405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:03.841 00:22:03.841 Latency(us) 00:22:03.841 [2024-12-09T09:32:41.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.841 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:03.841 nvme0n1 : 2.00 8301.84 1037.73 0.00 0.00 1924.42 1750.26 6290.40 00:22:03.841 [2024-12-09T09:32:41.564Z] =================================================================================================================== 00:22:03.841 [2024-12-09T09:32:41.564Z] Total : 8301.84 1037.73 0.00 0.00 1924.42 1750.26 6290.40 00:22:03.841 { 00:22:03.841 "results": [ 00:22:03.841 { 00:22:03.841 "job": "nvme0n1", 00:22:03.841 "core_mask": "0x2", 00:22:03.841 "workload": "randread", 00:22:03.841 "status": "finished", 00:22:03.841 "queue_depth": 16, 00:22:03.841 "io_size": 131072, 00:22:03.841 "runtime": 2.003412, 00:22:03.841 "iops": 8301.837065965463, 00:22:03.841 "mibps": 1037.7296332456829, 00:22:03.841 "io_failed": 0, 00:22:03.841 "io_timeout": 0, 00:22:03.841 "avg_latency_us": 1924.4227627199514, 00:22:03.841 "min_latency_us": 1750.2586345381526, 00:22:03.841 "max_latency_us": 6290.403212851405 00:22:03.841 } 00:22:03.841 ], 00:22:03.841 "core_count": 1 00:22:03.841 } 00:22:03.841 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:03.841 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:03.841 | .driver_specific 00:22:03.841 | .nvme_error 00:22:03.841 | .status_code 00:22:03.841 | .command_transient_transport_error' 00:22:03.841 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:03.841 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 537 > 0 )) 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80065 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80065 ']' 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80065 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80065 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:04.100 killing process with pid 80065 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80065' 00:22:04.100 Received shutdown signal, test time was about 2.000000 seconds 00:22:04.100 00:22:04.100 Latency(us) 00:22:04.100 
[2024-12-09T09:32:41.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.100 [2024-12-09T09:32:41.823Z] =================================================================================================================== 00:22:04.100 [2024-12-09T09:32:41.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80065 00:22:04.100 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80065 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80125 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80125 /var/tmp/bperf.sock 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80125 ']' 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.359 09:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.359 [2024-12-09 09:32:42.005888] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:22:04.359 [2024-12-09 09:32:42.005955] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80125 ] 00:22:04.618 [2024-12-09 09:32:42.141195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.618 [2024-12-09 09:32:42.190504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.618 [2024-12-09 09:32:42.233118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:05.186 09:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.186 09:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:05.186 09:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:05.186 09:32:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:05.445 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:05.445 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.445 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:05.445 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.445 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:05.445 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:05.704 nvme0n1 00:22:05.704 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:05.704 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.704 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:05.704 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.704 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:05.704 09:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:05.963 Running I/O for 2 seconds... 
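The trace above (host/digest.sh, run_bperf_err randwrite 4096 128) boils down to: start bdevperf in wait-for-RPC mode on its own socket, attach the controller with data digest (--ddgst) enabled while crc32c error injection is off, re-enable injection in "corrupt" mode so a fixed number of digests no longer match, run the workload, and then count how many completions came back as transient transport errors. The following is a condensed editorial sketch of that flow, not the actual script: it uses only commands visible in the trace (socket paths, addresses and the nqn are copied from it), it assumes rpc_cmd addresses the nvmf target's default RPC socket, and it collapses the multi-line jq filter shown above onto one line.

  # start bdevperf and have it wait for a perform_tests RPC (-z), on its own socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # initiator side: keep per-NVMe error statistics and retry failed I/O (as traced above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach with data digest enabled while crc32c injection is disabled on the target's accel layer
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt 256 crc32c results so the computed data digests stop matching
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the 2-second workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # each digest failure is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22);
  # the test passes when the counter is non-zero (537 in the randread run above)
  errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 ))

With that in mind, the WRITE-side digest errors logged below are the expected effect of the corrupt-mode injection, mirroring the READ-side errors from the earlier randread run.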
00:22:05.963 [2024-12-09 09:32:43.504212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb048 00:22:05.963 [2024-12-09 09:32:43.505352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.505388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.516527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb8b8 00:22:05.963 [2024-12-09 09:32:43.517627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.517659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.528735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc128 00:22:05.963 [2024-12-09 09:32:43.529816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.529849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.540875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc998 00:22:05.963 [2024-12-09 09:32:43.541940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.541973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.553101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efd208 00:22:05.963 [2024-12-09 09:32:43.554169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.554202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.565359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efda78 00:22:05.963 [2024-12-09 09:32:43.566407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.566441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.577511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efe2e8 00:22:05.963 [2024-12-09 09:32:43.578538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.578571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:22:05.963 [2024-12-09 09:32:43.589715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efeb58 00:22:05.963 [2024-12-09 09:32:43.590737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.590768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.606957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efef90 00:22:05.963 [2024-12-09 09:32:43.608937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.608967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.619141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efeb58 00:22:05.963 [2024-12-09 09:32:43.621093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.621126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.631480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efe2e8 00:22:05.963 [2024-12-09 09:32:43.633396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.633424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.643808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efda78 00:22:05.963 [2024-12-09 09:32:43.645730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.656073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efd208 00:22:05.963 [2024-12-09 09:32:43.657997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.658029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.668317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc998 00:22:05.963 [2024-12-09 09:32:43.670223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.670256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:22:05.963 [2024-12-09 09:32:43.680542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc128 00:22:05.963 [2024-12-09 09:32:43.682409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.963 [2024-12-09 09:32:43.682441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:06.221 [2024-12-09 09:32:43.692732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb8b8 00:22:06.222 [2024-12-09 09:32:43.694607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.694637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.704938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb048 00:22:06.222 [2024-12-09 09:32:43.706800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.706830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.717119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efa7d8 00:22:06.222 [2024-12-09 09:32:43.718988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.729349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef9f68 00:22:06.222 [2024-12-09 09:32:43.731179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.731210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.741626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef96f8 00:22:06.222 [2024-12-09 09:32:43.743428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.743469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.753994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef8e88 00:22:06.222 [2024-12-09 09:32:43.755794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.755829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.766441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef8618 00:22:06.222 [2024-12-09 09:32:43.768208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.768239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.778769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef7da8 00:22:06.222 [2024-12-09 09:32:43.780551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.780581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.791155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef7538 00:22:06.222 [2024-12-09 09:32:43.792883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.792913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.803540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef6cc8 00:22:06.222 [2024-12-09 09:32:43.805246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.805275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.815891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef6458 00:22:06.222 [2024-12-09 09:32:43.817595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.817625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.828222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef5be8 00:22:06.222 [2024-12-09 09:32:43.829915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.829945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.840551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef5378 00:22:06.222 [2024-12-09 09:32:43.842223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.842252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.852877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef4b08 00:22:06.222 [2024-12-09 09:32:43.854545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.854575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.865203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef4298 00:22:06.222 [2024-12-09 09:32:43.866861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.866892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.877548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef3a28 00:22:06.222 [2024-12-09 09:32:43.879174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.879206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.889900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef31b8 00:22:06.222 [2024-12-09 09:32:43.891534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.891565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.902337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef2948 00:22:06.222 [2024-12-09 09:32:43.903928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.903958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.914659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef20d8 00:22:06.222 [2024-12-09 09:32:43.916230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.916261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.926979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef1868 00:22:06.222 [2024-12-09 09:32:43.928548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.928578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:06.222 [2024-12-09 09:32:43.939348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef0ff8 00:22:06.222 [2024-12-09 09:32:43.940897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.222 [2024-12-09 09:32:43.940928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:43.951701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef0788 00:22:06.480 [2024-12-09 09:32:43.953268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:43.953305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:43.964112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeff18 00:22:06.480 [2024-12-09 09:32:43.965690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:43.965726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:43.976378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eef6a8 00:22:06.480 [2024-12-09 09:32:43.977901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:43.977934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:43.988627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeee38 00:22:06.480 [2024-12-09 09:32:43.990130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:43.990162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.000909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eee5c8 00:22:06.480 [2024-12-09 09:32:44.002390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.002431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.013158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eedd58 00:22:06.480 [2024-12-09 09:32:44.014647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.014679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.025438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eed4e8 00:22:06.480 [2024-12-09 09:32:44.026925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.026957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.037685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eecc78 00:22:06.480 [2024-12-09 09:32:44.039114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.039145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.049908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eec408 00:22:06.480 [2024-12-09 09:32:44.051328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.051359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.062139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eebb98 00:22:06.480 [2024-12-09 09:32:44.063556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.063586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.074389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeb328 00:22:06.480 [2024-12-09 09:32:44.075785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.075816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.086638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeaab8 00:22:06.480 [2024-12-09 09:32:44.087994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.088024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.098798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eea248 00:22:06.480 [2024-12-09 09:32:44.100176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.100207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.110945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee99d8 00:22:06.480 [2024-12-09 09:32:44.112304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.112334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.123272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee9168 00:22:06.480 [2024-12-09 09:32:44.124605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.480 [2024-12-09 09:32:44.124636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:06.480 [2024-12-09 09:32:44.135403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee88f8 00:22:06.481 [2024-12-09 09:32:44.136702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.481 [2024-12-09 09:32:44.136733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:06.481 [2024-12-09 09:32:44.147542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee8088 00:22:06.481 [2024-12-09 09:32:44.148826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.481 [2024-12-09 09:32:44.148856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:06.481 [2024-12-09 09:32:44.159861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee7818 00:22:06.481 [2024-12-09 09:32:44.161128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.481 [2024-12-09 09:32:44.161159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:06.481 [2024-12-09 09:32:44.172149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee6fa8 00:22:06.481 [2024-12-09 09:32:44.173402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.481 [2024-12-09 09:32:44.173433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:06.481 [2024-12-09 09:32:44.184503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee6738 00:22:06.481 [2024-12-09 09:32:44.185736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.481 [2024-12-09 09:32:44.185765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:06.481 [2024-12-09 09:32:44.196828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee5ec8 00:22:06.481 [2024-12-09 09:32:44.198057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.481 [2024-12-09 09:32:44.198087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.209133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee5658 00:22:06.753 [2024-12-09 09:32:44.210348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.210379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.221478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee4de8 00:22:06.753 [2024-12-09 09:32:44.222687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.222865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.234073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee4578 00:22:06.753 [2024-12-09 09:32:44.235256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.235293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.246485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee3d08 00:22:06.753 [2024-12-09 09:32:44.247663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.247698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.258862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee3498 00:22:06.753 [2024-12-09 09:32:44.260016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.260050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.271207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee2c28 00:22:06.753 [2024-12-09 09:32:44.272360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 
09:32:44.272397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.284245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee23b8 00:22:06.753 [2024-12-09 09:32:44.285367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.285401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.296578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee1b48 00:22:06.753 [2024-12-09 09:32:44.297698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.297731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.308862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee12d8 00:22:06.753 [2024-12-09 09:32:44.309951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.309985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.321193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee0a68 00:22:06.753 [2024-12-09 09:32:44.322295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.322330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.333551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee01f8 00:22:06.753 [2024-12-09 09:32:44.334620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.334654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.345988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016edf988 00:22:06.753 [2024-12-09 09:32:44.347044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.347080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.358378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016edf118 00:22:06.753 [2024-12-09 09:32:44.359413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:06.753 [2024-12-09 09:32:44.359448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.370733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ede8a8 00:22:06.753 [2024-12-09 09:32:44.371747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.371781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.383035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ede038 00:22:06.753 [2024-12-09 09:32:44.384032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.384067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.400435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ede038 00:22:06.753 [2024-12-09 09:32:44.402427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.402476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.412796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ede8a8 00:22:06.753 [2024-12-09 09:32:44.414759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.414794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.425115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016edf118 00:22:06.753 [2024-12-09 09:32:44.427063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.427096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.437488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016edf988 00:22:06.753 [2024-12-09 09:32:44.439394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.753 [2024-12-09 09:32:44.439428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:06.753 [2024-12-09 09:32:44.449836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee01f8 00:22:06.753 [2024-12-09 09:32:44.451735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:06.753 [2024-12-09 09:32:44.451767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:07.034 [2024-12-09 09:32:44.462183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee0a68 00:22:07.034 [2024-12-09 09:32:44.464059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.464092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.474991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee12d8 00:22:07.035 [2024-12-09 09:32:44.477264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.477306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.488320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee1b48 00:22:07.035 [2024-12-09 09:32:44.490235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.490275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:07.035 20495.00 IOPS, 80.06 MiB/s [2024-12-09T09:32:44.758Z] [2024-12-09 09:32:44.502123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee23b8 00:22:07.035 [2024-12-09 09:32:44.503967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.504007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.514503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee2c28 00:22:07.035 [2024-12-09 09:32:44.516309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.516345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.526902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee3498 00:22:07.035 [2024-12-09 09:32:44.528710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.528744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.539240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee3d08 00:22:07.035 [2024-12-09 09:32:44.541056] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.541090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.551440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee4578 00:22:07.035 [2024-12-09 09:32:44.553357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.553385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.563968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee4de8 00:22:07.035 [2024-12-09 09:32:44.565728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.565762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.576329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee5658 00:22:07.035 [2024-12-09 09:32:44.578095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.578130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.588662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee5ec8 00:22:07.035 [2024-12-09 09:32:44.590391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.590426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.601069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee6738 00:22:07.035 [2024-12-09 09:32:44.602807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.602841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.613282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee6fa8 00:22:07.035 [2024-12-09 09:32:44.615006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.615040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.625518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee7818 00:22:07.035 [2024-12-09 09:32:44.627199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.627235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.637731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee8088 00:22:07.035 [2024-12-09 09:32:44.639401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.639435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.649886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee88f8 00:22:07.035 [2024-12-09 09:32:44.651551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.651584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.662248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee9168 00:22:07.035 [2024-12-09 09:32:44.663900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.663932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.674428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ee99d8 00:22:07.035 [2024-12-09 09:32:44.676062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.676095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.686846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eea248 00:22:07.035 [2024-12-09 09:32:44.688437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.035 [2024-12-09 09:32:44.688478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:07.035 [2024-12-09 09:32:44.699247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeaab8 00:22:07.035 [2024-12-09 09:32:44.700851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.036 [2024-12-09 09:32:44.700884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:07.036 [2024-12-09 09:32:44.711489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeb328 00:22:07.036 [2024-12-09 09:32:44.713091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.036 [2024-12-09 09:32:44.713125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:07.036 [2024-12-09 09:32:44.723816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eebb98 00:22:07.036 [2024-12-09 09:32:44.725378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.036 [2024-12-09 09:32:44.725413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:07.036 [2024-12-09 09:32:44.736091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eec408 00:22:07.036 [2024-12-09 09:32:44.737653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.036 [2024-12-09 09:32:44.737687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:07.036 [2024-12-09 09:32:44.748295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eecc78 00:22:07.036 [2024-12-09 09:32:44.749845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.036 [2024-12-09 09:32:44.749878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.760565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eed4e8 00:22:07.296 [2024-12-09 09:32:44.762093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.762131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.772751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eedd58 00:22:07.296 [2024-12-09 09:32:44.774254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.774290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.785003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eee5c8 00:22:07.296 [2024-12-09 09:32:44.786512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.786546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.797211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeee38 00:22:07.296 [2024-12-09 
09:32:44.798718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.798750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.809452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eef6a8 00:22:07.296 [2024-12-09 09:32:44.810927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.810962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.821801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016eeff18 00:22:07.296 [2024-12-09 09:32:44.823240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.823275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.834199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef0788 00:22:07.296 [2024-12-09 09:32:44.835638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.835809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.846778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef0ff8 00:22:07.296 [2024-12-09 09:32:44.848181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.848218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.859085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef1868 00:22:07.296 [2024-12-09 09:32:44.860472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.860524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.871265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef20d8 00:22:07.296 [2024-12-09 09:32:44.872656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.872689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.883452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef2948 00:22:07.296 
[2024-12-09 09:32:44.884815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.884849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.895620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef31b8 00:22:07.296 [2024-12-09 09:32:44.896998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.897033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.907881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef3a28 00:22:07.296 [2024-12-09 09:32:44.909353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.909382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.920224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef4298 00:22:07.296 [2024-12-09 09:32:44.921604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.921638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.932646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef4b08 00:22:07.296 [2024-12-09 09:32:44.933940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.933974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.944973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef5378 00:22:07.296 [2024-12-09 09:32:44.946263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.946299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:07.296 [2024-12-09 09:32:44.957364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef5be8 00:22:07.296 [2024-12-09 09:32:44.958660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.296 [2024-12-09 09:32:44.958830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:07.297 [2024-12-09 09:32:44.969926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef6458 
00:22:07.297 [2024-12-09 09:32:44.971188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.297 [2024-12-09 09:32:44.971225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:07.297 [2024-12-09 09:32:44.982309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef6cc8 00:22:07.297 [2024-12-09 09:32:44.983557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.297 [2024-12-09 09:32:44.983590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:07.297 [2024-12-09 09:32:44.994677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef7538 00:22:07.297 [2024-12-09 09:32:44.996032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.297 [2024-12-09 09:32:44.996067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:07.297 [2024-12-09 09:32:45.007182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef7da8 00:22:07.297 [2024-12-09 09:32:45.008408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.297 [2024-12-09 09:32:45.008443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.019573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef8618 00:22:07.565 [2024-12-09 09:32:45.020782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.020816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.031890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef8e88 00:22:07.565 [2024-12-09 09:32:45.033061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.033096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.044222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef96f8 00:22:07.565 [2024-12-09 09:32:45.045396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.045429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.056607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with 
pdu=0x200016ef9f68 00:22:07.565 [2024-12-09 09:32:45.057750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.057784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.068887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efa7d8 00:22:07.565 [2024-12-09 09:32:45.070028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.070070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.081236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb048 00:22:07.565 [2024-12-09 09:32:45.082368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.082404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.093603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb8b8 00:22:07.565 [2024-12-09 09:32:45.094705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.094741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.105917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc128 00:22:07.565 [2024-12-09 09:32:45.107007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.107041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.118100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc998 00:22:07.565 [2024-12-09 09:32:45.119186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.119221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.130297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efd208 00:22:07.565 [2024-12-09 09:32:45.131368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.565 [2024-12-09 09:32:45.131534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:07.565 [2024-12-09 09:32:45.142671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x9d1b70) with pdu=0x200016efda78 00:22:07.565 [2024-12-09 09:32:45.143746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.143781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.155072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efe2e8 00:22:07.566 [2024-12-09 09:32:45.156103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.156137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.167343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efeb58 00:22:07.566 [2024-12-09 09:32:45.168366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.168401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.184688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efef90 00:22:07.566 [2024-12-09 09:32:45.186803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.186957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.197209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efeb58 00:22:07.566 [2024-12-09 09:32:45.199180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.199215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.209544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efe2e8 00:22:07.566 [2024-12-09 09:32:45.211483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.211517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.221920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efda78 00:22:07.566 [2024-12-09 09:32:45.223843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.223875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.234296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9d1b70) with pdu=0x200016efd208 00:22:07.566 [2024-12-09 09:32:45.236200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.236233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.246700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc998 00:22:07.566 [2024-12-09 09:32:45.248586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.248619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.259056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efc128 00:22:07.566 [2024-12-09 09:32:45.260935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.260968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.271424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb8b8 00:22:07.566 [2024-12-09 09:32:45.273295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.566 [2024-12-09 09:32:45.273327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:07.566 [2024-12-09 09:32:45.283801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efb048 00:22:07.827 [2024-12-09 09:32:45.285636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.285666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.296139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016efa7d8 00:22:07.827 [2024-12-09 09:32:45.297968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.298000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.308493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef9f68 00:22:07.827 [2024-12-09 09:32:45.310302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.310337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.320853] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef96f8 00:22:07.827 [2024-12-09 09:32:45.322660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.322700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.333217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef8e88 00:22:07.827 [2024-12-09 09:32:45.335008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.335043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.345569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef8618 00:22:07.827 [2024-12-09 09:32:45.347336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.347371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.357979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef7da8 00:22:07.827 [2024-12-09 09:32:45.359743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.359776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.370317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef7538 00:22:07.827 [2024-12-09 09:32:45.372064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.372098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.382674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef6cc8 00:22:07.827 [2024-12-09 09:32:45.384381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.384413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.394998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef6458 00:22:07.827 [2024-12-09 09:32:45.396703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.396733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 
09:32:45.407280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef5be8 00:22:07.827 [2024-12-09 09:32:45.408971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.409003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.419716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef5378 00:22:07.827 [2024-12-09 09:32:45.421379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.421411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.432083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef4b08 00:22:07.827 [2024-12-09 09:32:45.433764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.433798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.444441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef4298 00:22:07.827 [2024-12-09 09:32:45.446102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.446135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.456850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef3a28 00:22:07.827 [2024-12-09 09:32:45.458636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.458670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.469251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef31b8 00:22:07.827 [2024-12-09 09:32:45.470898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.470931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:07.827 [2024-12-09 09:32:45.481645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1b70) with pdu=0x200016ef2948 00:22:07.827 [2024-12-09 09:32:45.483243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.827 [2024-12-09 09:32:45.483278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
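Each pair of entries above is one injected CRC failure surfacing through the data-digest path: tcp.c flags the digest mismatch on the qpair and the corresponding WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). After the 2-second run the harness reads the accumulated error count back over the bperf RPC socket with bdev_get_iostat, as the trace following the summary below shows. A minimal sketch of that query, reusing the socket, bdev name, and jq filter visible in the trace:

    # ask the bdevperf app for per-bdev I/O statistics, including NVMe error counters
    # (populated because bdev_nvme_set_options was called with --nvme-error-stat)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    # prints the transient transport error count (160 in this run); the test only
    # requires the value to be greater than zero before tearing the process down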
00:22:07.827 00:22:07.827 Latency(us) 00:22:07.827 [2024-12-09T09:32:45.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.827 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:07.827 nvme0n1 : 2.00 20479.55 80.00 0.00 0.00 6245.02 3211.00 23582.43 00:22:07.827 [2024-12-09T09:32:45.550Z] =================================================================================================================== 00:22:07.827 [2024-12-09T09:32:45.550Z] Total : 20479.55 80.00 0.00 0.00 6245.02 3211.00 23582.43 00:22:07.827 { 00:22:07.827 "results": [ 00:22:07.827 { 00:22:07.828 "job": "nvme0n1", 00:22:07.828 "core_mask": "0x2", 00:22:07.828 "workload": "randwrite", 00:22:07.828 "status": "finished", 00:22:07.828 "queue_depth": 128, 00:22:07.828 "io_size": 4096, 00:22:07.828 "runtime": 2.001509, 00:22:07.828 "iops": 20479.548180897513, 00:22:07.828 "mibps": 79.99823508163091, 00:22:07.828 "io_failed": 0, 00:22:07.828 "io_timeout": 0, 00:22:07.828 "avg_latency_us": 6245.021030068065, 00:22:07.828 "min_latency_us": 3211.000803212851, 00:22:07.828 "max_latency_us": 23582.432128514058 00:22:07.828 } 00:22:07.828 ], 00:22:07.828 "core_count": 1 00:22:07.828 } 00:22:07.828 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:07.828 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:07.828 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:07.828 | .driver_specific 00:22:07.828 | .nvme_error 00:22:07.828 | .status_code 00:22:07.828 | .command_transient_transport_error' 00:22:07.828 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80125 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80125 ']' 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80125 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80125 00:22:08.086 killing process with pid 80125 00:22:08.086 Received shutdown signal, test time was about 2.000000 seconds 00:22:08.086 00:22:08.086 Latency(us) 00:22:08.086 [2024-12-09T09:32:45.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.086 [2024-12-09T09:32:45.809Z] =================================================================================================================== 00:22:08.086 [2024-12-09T09:32:45.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80125' 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80125 00:22:08.086 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80125 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80181 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80181 /var/tmp/bperf.sock 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80181 ']' 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:08.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.345 09:32:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:08.345 [2024-12-09 09:32:45.996616] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:22:08.345 [2024-12-09 09:32:45.996815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80181 ] 00:22:08.345 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:08.345 Zero copy mechanism will not be used. 
00:22:08.603 [2024-12-09 09:32:46.147715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.603 [2024-12-09 09:32:46.196568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.603 [2024-12-09 09:32:46.239171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:09.223 09:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.223 09:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:09.223 09:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:09.223 09:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:09.482 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:09.482 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.482 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:09.482 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.482 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:09.482 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:09.741 nvme0n1 00:22:09.741 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:09.741 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.741 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:09.741 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.741 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:09.741 09:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:09.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:09.741 Zero copy mechanism will not be used. 00:22:09.741 Running I/O for 2 seconds... 
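The trace above prepares the next case (randwrite, 131072-byte I/O, queue depth 16): bdevperf is restarted on the bperf socket, NVMe error statistics and unlimited bdev retries are enabled, crc32c injection is cleared while the controller attaches over TCP with data digest (--ddgst), and the target's accel layer is then told to corrupt every 32nd crc32c operation before the workload starts. A condensed sketch of that sequence, assuming the injection calls go to the target application's default RPC socket, which is what the harness's rpc_cmd wrapper does:

    # start bdevperf on its own RPC socket (2 s randwrite, 128 KiB I/O, qd 16, wait for RPC)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    # count NVMe errors per status code and retry failed I/O indefinitely
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep injection disabled while the controller attaches with TCP data digest enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c operation in the target's accel layer, then run the workload
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests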
00:22:10.001 [2024-12-09 09:32:47.463673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.001 [2024-12-09 09:32:47.463905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.001 [2024-12-09 09:32:47.463938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.001 [2024-12-09 09:32:47.467649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.001 [2024-12-09 09:32:47.467922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.001 [2024-12-09 09:32:47.467953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.001 [2024-12-09 09:32:47.471205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.001 [2024-12-09 09:32:47.471399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.001 [2024-12-09 09:32:47.471423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.001 [2024-12-09 09:32:47.475149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.001 [2024-12-09 09:32:47.475326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.001 [2024-12-09 09:32:47.475348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.001 [2024-12-09 09:32:47.479091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.001 [2024-12-09 09:32:47.479269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.001 [2024-12-09 09:32:47.479291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.001 [2024-12-09 09:32:47.483022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.001 [2024-12-09 09:32:47.483112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.001 [2024-12-09 09:32:47.483134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.486880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.486955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.486977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.490594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.490733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.490756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.494317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.494581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.494603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.498201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.498388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.498410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.501804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.502101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.502122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.505439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.505630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.505673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.509260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.509443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.509466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.513106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.513284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.513307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.516988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.517165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.517188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.520886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.520983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.521006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.524495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.524651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.524673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.528316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.528525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.528548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.532198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.532370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.532393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.535783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.536035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.536058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.539376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.539559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.539581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.543319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.543497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.543519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.547242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.547411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.547432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.551148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.551343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.551365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.555000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.555166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.555189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.558840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.558969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.002 [2024-12-09 09:32:47.558991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.002 [2024-12-09 09:32:47.562619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.002 [2024-12-09 09:32:47.562711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.562734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.566340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.566541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.566563] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.569900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.570187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.570209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.573551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.573611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.573633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.577340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.577541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.577564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.581211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.581386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.581409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.585171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.585347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.585370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.589065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.589236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.589258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.592935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.593020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.593043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.596722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.596801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.596824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.600449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.600606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.600628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.603880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.604154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.604176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.607505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.607586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.607608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.611250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.611423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.611445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.615180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.615355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.615377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.619105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.619183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 
09:32:47.619206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.622800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.622885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.622908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.626510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.626659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.626681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.630184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.630381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.630403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.634191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.634377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.634399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.637798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.638096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.638119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.641533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.641591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.641614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.645302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.645475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.003 [2024-12-09 09:32:47.645499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.649203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.649365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.649387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.653088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.653252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.653275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.657003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.657165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.657187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.660897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.003 [2024-12-09 09:32:47.661050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.003 [2024-12-09 09:32:47.661072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.003 [2024-12-09 09:32:47.664744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.664829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.664852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.668582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.668644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.668667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.671873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.672327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.672360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.675771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.675867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.675889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.679565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.679635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.679675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.683349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.683527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.683550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.687231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.687425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.687447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.691098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.691176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.691198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.694924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.695112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.695134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.698806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.698956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.698978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.702203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.702491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.702513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.705899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.706072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.706094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.709788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.709854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.709876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.713611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.713683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.713706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.004 [2024-12-09 09:32:47.717394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.004 [2024-12-09 09:32:47.717582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.004 [2024-12-09 09:32:47.717605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.264 [2024-12-09 09:32:47.721289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.264 [2024-12-09 09:32:47.721491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.264 [2024-12-09 09:32:47.721514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.264 [2024-12-09 09:32:47.725296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.725494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.725517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.729247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.729418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.729440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.733189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.733381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.733402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.736737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.737021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.737044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.740369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.740554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.740577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.744280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.744463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.744498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.748204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.748390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.748412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.752095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.752288] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.752310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.755952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.756175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.756197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.759860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.759998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.760020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.763740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.763858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.763881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.767568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.767739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.767768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.771040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.771308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.771330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.774740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.774852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.775141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.778693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.778884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.779039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.782599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.782788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.782954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.786545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.786739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.787044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.790560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.790771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.790923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.794438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.794641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.794797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.798242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.798429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.798602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.802142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.802369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.802533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.805892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 
09:32:47.806185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.806341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.809750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.809815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.809839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.813526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.813583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.813606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.817224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.817395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.817417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.821146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.821321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.265 [2024-12-09 09:32:47.821343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.265 [2024-12-09 09:32:47.825055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.265 [2024-12-09 09:32:47.825240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.825262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.828899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.829031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.829053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.832650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 
00:22:10.266 [2024-12-09 09:32:47.832753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.832775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.836429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.836619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.836642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.840066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.840335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.840357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.843686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.843749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.843770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.847345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.847528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.847550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.851193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.851362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.851384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.855069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.855129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.855152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.858817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with 
pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.858931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.858953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.862444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.862620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.862642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.866339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.866440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.866476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.870340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.870589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.870748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.874034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.874294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.874485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.877877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.878073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.878250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.881897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.882088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.882293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.885794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.885975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.886244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.889720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.889906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.890184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.893779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.893966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.894288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.897818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.898002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.898162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.901695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.901922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.902126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.905638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.905819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.905842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.909176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.909433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.909456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.912756] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.912913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.912935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.916576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.916636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.266 [2024-12-09 09:32:47.916658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.266 [2024-12-09 09:32:47.920238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.266 [2024-12-09 09:32:47.920423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.920446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.924152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.924343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.924365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.927974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.928142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.928165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.931796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.931888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.931911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.935565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.935699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.935721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.939279] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.939497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.939520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.942903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.943155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.943184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.946503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.946571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.946592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.950194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.950367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.950388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.954119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.954180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.954203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.958131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.958320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.958527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.961996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.962201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.962364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.267 
[2024-12-09 09:32:47.965904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.966125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.966479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.970072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.970435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.974077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.974266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.974499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.977723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.977981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.978338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.267 [2024-12-09 09:32:47.981709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.267 [2024-12-09 09:32:47.981894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.267 [2024-12-09 09:32:47.982064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.536 [2024-12-09 09:32:47.985819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.536 [2024-12-09 09:32:47.986006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.536 [2024-12-09 09:32:47.986164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.536 [2024-12-09 09:32:47.989688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.536 [2024-12-09 09:32:47.989878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.536 [2024-12-09 09:32:47.990263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:22:10.536 [2024-12-09 09:32:47.993906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.536 [2024-12-09 09:32:47.994080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:47.994105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:47.997724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:47.997779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:47.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.001452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.001654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.001676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.005364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.005541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.005564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.009394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.009599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.009621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.013279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.013471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.013510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.016804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.017080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.017103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.020493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.020562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.020584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.024243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.024410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.024432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.028204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.028403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.028425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.032116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.032325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.032346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.036049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.036217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.036238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.039999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.040102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.040124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.043766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.043840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.043861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.047554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.047610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.047632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.050896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.051357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.051389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.054730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.054810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.054832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.058440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.058619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.058642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.062297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.062477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.062500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.066194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.066316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.066338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.069840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.070014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-09 09:32:48.070036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.537 [2024-12-09 09:32:48.073804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.537 [2024-12-09 09:32:48.073913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.073936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.077588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.077730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.077752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.080918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.081185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.081207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.084632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.084716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.084738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.088406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.088584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.088606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.092239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.092438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.092460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.096173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.096348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.096371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.100037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.100208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.100232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.103972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.104068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.104091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.107793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.107888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.107911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.111541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.111617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.111656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.114917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.115353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.115387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.118645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.118728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.118750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.122377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.122554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 
09:32:48.122576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.126353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.126408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.126429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.130201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.130376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.130399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.134126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.134291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.134314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.138014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.138226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.138248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.141960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.142151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.142307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.145552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.538 [2024-12-09 09:32:48.145818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-09 09:32:48.145990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.538 [2024-12-09 09:32:48.149355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.149549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.539 [2024-12-09 09:32:48.149787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.153309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.153510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.153745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.157168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.157380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.157569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.161072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.161267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.161417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.164951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.165138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.165321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.168840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.169027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.169175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.172752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.172996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.173143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.176342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.176614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.176637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.179988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.180165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.180188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.183835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.183896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.183919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.187633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.187689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.187712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.191351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.191588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.191610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.195219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.195413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.195436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.199082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.199277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.199299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.202978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.203153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18048 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.203175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.206955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.207155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.207326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.210625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.210907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.211085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.214519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.214713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.214872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.218546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.218732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.218879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.222354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.222556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-09 09:32:48.222738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.539 [2024-12-09 09:32:48.226312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.539 [2024-12-09 09:32:48.226521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.226709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.230505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.230696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.230861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.234434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.234672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.234903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.238407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.238661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.238810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.242224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.242413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.242437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.246181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.246412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.246434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.249681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.249928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.249950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.540 [2024-12-09 09:32:48.253228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.540 [2024-12-09 09:32:48.253287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.540 [2024-12-09 09:32:48.253309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.257007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.257186] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.257208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.260885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.260999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.261021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.264678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.264751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.264773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.268443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.268655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.268678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.272354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.272568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.272590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.276289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.276480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.276518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.280241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.280438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.280460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.283857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.284114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.284136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.287480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.287548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.287570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.291142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.291321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.291342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.294979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.295039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.295061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.298734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.298792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.298814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.302487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.302584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.302606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.306348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.306574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.306596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.310274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 
09:32:48.310485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.310507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.314219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.314421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.314443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.317717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.317982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.801 [2024-12-09 09:32:48.318004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.801 [2024-12-09 09:32:48.321379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.801 [2024-12-09 09:32:48.321546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.321568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.325240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.325399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.325421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.329136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.329297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.329319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.333039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.333199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.333221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.336939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 
00:22:10.802 [2024-12-09 09:32:48.337047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.337069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.340597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.340739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.340763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.344299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.344526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.344549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.348293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.348494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.348517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.351926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.352202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.352224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.355550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.355618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.355639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.359231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.359399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.359420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.363090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) 
with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.363271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.363293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.367064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.367264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.367285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.370968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.371049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.371071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.374688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.374830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.374851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.378846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.378928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.378950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.382561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.382629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.382651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.385864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.386333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.386365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.389666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.389745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.389767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.393330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.393505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.393529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.397153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.397315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.397337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.400993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.401162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.401185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.404897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.404983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.405006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.408678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.408740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.408764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.412395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.412634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.412657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.416067] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.416304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.416334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.419672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.802 [2024-12-09 09:32:48.419843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.802 [2024-12-09 09:32:48.419866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.802 [2024-12-09 09:32:48.423545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.423616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.423638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.427344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.427521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.427543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.431235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.431404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.431426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.435184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.435344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.435367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.439075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.439183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.439204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.443042] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.443277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.443300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.446964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.447026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.447048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.450285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.450750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.450782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.454039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.454231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.454253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.457964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.458023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.458056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.803 8103.00 IOPS, 1012.88 MiB/s [2024-12-09T09:32:48.526Z] [2024-12-09 09:32:48.462754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.462813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.462836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.466570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.466638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.466661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.470345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.470418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.470441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.474184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.474295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.474317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.477976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.478129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.478151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.481329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.481624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.481648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.484934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.485100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.485121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.488890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.488951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.488972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.492647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.492723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.492746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.496368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.496568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.496590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.500256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.500420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.500442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.504122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.504316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.504338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.508039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.508234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.508256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.512004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.512191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.512213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.515743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.516012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.516176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.803 [2024-12-09 09:32:48.519575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:10.803 [2024-12-09 09:32:48.519766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.803 [2024-12-09 09:32:48.520029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.523729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.523913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.524063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.527674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.527857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.528025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.531562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.531769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.531923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.535388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.535603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.535749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.539251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.539488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.539668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.543164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.543407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.543566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.547249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.547498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.547660] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.550821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.551070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.551094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.554424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.554604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.554627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.558299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.558481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.558505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.065 [2024-12-09 09:32:48.562190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.065 [2024-12-09 09:32:48.562355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.065 [2024-12-09 09:32:48.562378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.566163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.566343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.566365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.570068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.570254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.570275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.573919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.573981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.574004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.577746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.577833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.577855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.581444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.581645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.581667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.585061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.585341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.585363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.588715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.588878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.588900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.592585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.592653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.592675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.596356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.596558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.596580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.600232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.600413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 
09:32:48.600436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.604132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.604297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.604320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.608088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.608256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.608279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.612042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.612218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.612241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.615991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.616052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.616075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.619377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.619863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.619895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.623168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.623358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.623380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.627078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.627138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:11.066 [2024-12-09 09:32:48.627161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.630813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.630875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.630897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.634574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.634653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.634675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.066 [2024-12-09 09:32:48.638245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.066 [2024-12-09 09:32:48.638409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.066 [2024-12-09 09:32:48.638431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.642076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.642270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.642292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.645974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.646151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.646174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.649535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.649815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.649843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.653152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.653314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.653336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.657048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.657221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.657243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.661022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.661080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.661103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.664791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.664862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.664885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.668557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.668695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.668717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.672278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.672454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.672492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.676225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.676397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.676420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.680185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.680359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.680381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.683828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.684090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.684118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.687486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.687558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.687581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.691214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.691393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.691415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.695079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.695255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.695277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.699094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.699266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.699288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.703024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.703151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.703175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.706951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.707138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.707340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.710841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.711032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.711197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.714673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.067 [2024-12-09 09:32:48.714915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.067 [2024-12-09 09:32:48.715165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.067 [2024-12-09 09:32:48.718352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.718650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.718815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.722219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.722408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.722579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.726211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.726400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.726573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.730071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.730260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.730422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.733961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.734155] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.734315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.737878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.738071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.738232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.741794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.742014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.742180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.745685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.745868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.746018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.749607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.749830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.749978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.753154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.753422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.753452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.756859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.756930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.756953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.760682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.760739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.760762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.764412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.764623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.764645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.768320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.768519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.768542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.772172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.772366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.772388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.776088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.776258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.776280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.779911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.780078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.780101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.068 [2024-12-09 09:32:48.783842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.068 [2024-12-09 09:32:48.783902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.068 [2024-12-09 09:32:48.783924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.787206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 
09:32:48.787674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.787707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.791004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.791191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.791215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.794858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.794922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.794945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.798642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.798721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.798743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.802357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.802542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.802565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.806202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.806384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.806407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.810094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.810294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.810316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.813925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 
00:22:11.329 [2024-12-09 09:32:48.814107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.814130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.817536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.817821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.817850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.821119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.821280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.821301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.824941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.825007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.825028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.828737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.828795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.828817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.832453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.832526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.832548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.836215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.836386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.836407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.840161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) 
with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.840372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.840394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.844107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.844338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.844360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.848003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.848181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.848203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.851656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.851920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.851949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.855212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.855378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.329 [2024-12-09 09:32:48.855401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.329 [2024-12-09 09:32:48.859107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.329 [2024-12-09 09:32:48.859275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.859298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.862986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.863046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.863069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.866753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.866814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.866836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.870550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.870612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.870634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.874284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.874551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.874574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.878234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.878403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.878426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.882176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.882361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.882383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.885774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.886068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.886104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.889342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.889518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.889540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.893228] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.893403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.893424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.897116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.897280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.897302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.901020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.901081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.901103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.904795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.904888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.904910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.908509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.908645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.908667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.912175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.912343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.912366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.916083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.916270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.916293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.919671] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.919947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.919976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.923284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.923372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.927097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.927270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.927292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.930973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.931034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.931057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.934774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.934903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.934925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.938527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.938607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.938630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.942371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.942578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.942600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 
[2024-12-09 09:32:48.946331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.946538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.946560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.950222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.950392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.950414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.953740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.953992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.954014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.957302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.330 [2024-12-09 09:32:48.957479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.330 [2024-12-09 09:32:48.957501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.330 [2024-12-09 09:32:48.961135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.961293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.961315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.965005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.965172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.965194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.969091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.969296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.969623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:22:11.331 [2024-12-09 09:32:48.973137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.973350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.973614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.977177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.977365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.977587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.981231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.981449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.981656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.985142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.985332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.988736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.988983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.989193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.992526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.992715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.992879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:48.996410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:48.996611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:48.996759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.000268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.000474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.000638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.004294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.004496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.004519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.008217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.008387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.008412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.012113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.012302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.012325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.016063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.016123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.016146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.019566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.020041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.020202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.023578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.023788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.023996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.027602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.027803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.027985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.031541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.031738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.031883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.035471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.035659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.035963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.039688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.039876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.040062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.043649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.043835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.043999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.331 [2024-12-09 09:32:49.047678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.331 [2024-12-09 09:32:49.047863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.331 [2024-12-09 09:32:49.048019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.051858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.052249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.055844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.056082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.056288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.059546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.059827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.059990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.063266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.063431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.063455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.067193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.067362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.067387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.071024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.071206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.071231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.074958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.075030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.075054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.078711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.078792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.078815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.082424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.082617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.082639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.086367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.086550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.086573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.090292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.090484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.090506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.093821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.094090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.094112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.097600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.097668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.097691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.101559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.101620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.101642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.105280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.105446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 
09:32:49.105485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.109181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.109351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.113069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.113237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.113259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.116938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.117087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.117117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.120674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.120757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.120780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.124413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.124610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.124632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.128022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.128300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.592 [2024-12-09 09:32:49.128322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.131765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.131877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:11.592 [2024-12-09 09:32:49.132107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.592 [2024-12-09 09:32:49.135720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.592 [2024-12-09 09:32:49.135906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.136092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.139666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.139734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.139757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.143410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.143584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.143607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.147331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.147540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.147563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.151151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.151346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.151368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.155059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.155222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.155245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.158922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.159092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.159114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.162557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.162827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.162856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.166169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.166334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.166356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.170111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.170285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.170308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.174082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.174244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.174266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.178007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.178092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.178114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.181774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.181854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.181876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.185524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.185586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.185608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.189293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.189472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.189496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.193266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.193443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.193481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.196882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.197145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.197175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.200770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.201258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.201291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.204555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.204633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.204656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.208263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.208434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.208456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.212192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.212355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.212377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.216039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.216233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.216254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.219928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.220007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.220030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.223599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.223742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.223765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.227277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.227446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.227484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.231313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.231493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.231516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.234876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.593 [2024-12-09 09:32:49.235148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.593 [2024-12-09 09:32:49.235170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.593 [2024-12-09 09:32:49.238473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.238536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.238558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.242179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.242350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.242371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.246037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.246222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.246243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.249844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.249904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.249926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.253502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.253564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.253585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.257212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.257376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.257398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.261109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.261295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.261318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.265044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.265226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.265248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.268678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.268967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.268999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.272339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.272512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.272535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.276402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.276603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.276627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.280240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.280406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.280428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.284100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.284257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.284279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.288007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.288203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.288226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.291890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 
09:32:49.291990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.292013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.295668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.295774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.295796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.299546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.299708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.299731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.302893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.303158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.303180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.594 [2024-12-09 09:32:49.306573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.594 [2024-12-09 09:32:49.306636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.594 [2024-12-09 09:32:49.306659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.310265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.310442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.310479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.314104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.314279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.314301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.318013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 
00:22:11.854 [2024-12-09 09:32:49.318191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.318214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.321890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.322061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.322083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.325639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.325777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.325800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.329375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.329552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.329575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.333316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.333510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.333532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.336928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.337200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.337234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.340600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.340662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.340685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.344375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) 
with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.344550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.344573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.348255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.854 [2024-12-09 09:32:49.348428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.854 [2024-12-09 09:32:49.348450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.854 [2024-12-09 09:32:49.352143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.352313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.352335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.356048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.356238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.356261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.360006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.360065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.360088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.363738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.363804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.363826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.367535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.367593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.367616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.370892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.371347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.371380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.374683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.374768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.374790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.378407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.378640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.378662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.382170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.382334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.382355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.386087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.386272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.386293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.390029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.390206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.390229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.393939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.394024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.394057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.397722] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.397782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.397804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.401120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.401594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.401626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.404835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.404929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.404951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.408608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.408664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.408686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.412308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.412493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.412515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.416178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.416347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.416369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.420162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.420329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.420352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.424051] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.424262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.424284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.427914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.428053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.428082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.431339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.431625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.431713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.435066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.435236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.435260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.439087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.439254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.439277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.443025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.443088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.443111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.446773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.446836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.446858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 
09:32:49.450632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.855 [2024-12-09 09:32:49.450704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-12-09 09:32:49.450727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.855 [2024-12-09 09:32:49.454376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.856 [2024-12-09 09:32:49.454555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-12-09 09:32:49.454578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.856 8105.50 IOPS, 1013.19 MiB/s [2024-12-09T09:32:49.579Z] [2024-12-09 09:32:49.459493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9d1d10) with pdu=0x200016eff3c8 00:22:11.856 [2024-12-09 09:32:49.459599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-12-09 09:32:49.459621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.856 00:22:11.856 Latency(us) 00:22:11.856 [2024-12-09T09:32:49.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.856 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:11.856 nvme0n1 : 2.00 8102.19 1012.77 0.00 0.00 1970.52 1329.14 4974.42 00:22:11.856 [2024-12-09T09:32:49.579Z] =================================================================================================================== 00:22:11.856 [2024-12-09T09:32:49.579Z] Total : 8102.19 1012.77 0.00 0.00 1970.52 1329.14 4974.42 00:22:11.856 { 00:22:11.856 "results": [ 00:22:11.856 { 00:22:11.856 "job": "nvme0n1", 00:22:11.856 "core_mask": "0x2", 00:22:11.856 "workload": "randwrite", 00:22:11.856 "status": "finished", 00:22:11.856 "queue_depth": 16, 00:22:11.856 "io_size": 131072, 00:22:11.856 "runtime": 2.003163, 00:22:11.856 "iops": 8102.186392220703, 00:22:11.856 "mibps": 1012.7732990275879, 00:22:11.856 "io_failed": 0, 00:22:11.856 "io_timeout": 0, 00:22:11.856 "avg_latency_us": 1970.5245877657273, 00:22:11.856 "min_latency_us": 1329.1437751004016, 00:22:11.856 "max_latency_us": 4974.419277108434 00:22:11.856 } 00:22:11.856 ], 00:22:11.856 "core_count": 1 00:22:11.856 } 00:22:11.856 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:11.856 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:11.856 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:11.856 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:11.856 | .driver_specific 00:22:11.856 | .nvme_error 00:22:11.856 | .status_code 00:22:11.856 | .command_transient_transport_error' 00:22:12.115 09:32:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 524 > 0 )) 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80181 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80181 ']' 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80181 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80181 00:22:12.115 killing process with pid 80181 00:22:12.115 Received shutdown signal, test time was about 2.000000 seconds 00:22:12.115 00:22:12.115 Latency(us) 00:22:12.115 [2024-12-09T09:32:49.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.115 [2024-12-09T09:32:49.838Z] =================================================================================================================== 00:22:12.115 [2024-12-09T09:32:49.838Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80181' 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80181 00:22:12.115 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80181 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79977 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79977 ']' 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79977 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79977 00:22:12.375 killing process with pid 79977 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79977' 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79977 00:22:12.375 09:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79977 00:22:12.634 00:22:12.634 real 0m17.401s 00:22:12.634 user 0m32.405s 00:22:12.634 sys 0m5.338s 
00:22:12.634 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.634 ************************************ 00:22:12.634 END TEST nvmf_digest_error 00:22:12.634 ************************************ 00:22:12.634 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.635 rmmod nvme_tcp 00:22:12.635 rmmod nvme_fabrics 00:22:12.635 rmmod nvme_keyring 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79977 ']' 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79977 00:22:12.635 Process with pid 79977 is not found 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79977 ']' 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79977 00:22:12.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79977) - No such process 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79977 is not found' 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:12.635 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip 
link set nvmf_tgt_br2 nomaster 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.894 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:13.152 00:22:13.152 real 0m36.587s 00:22:13.152 user 1m6.223s 00:22:13.152 sys 0m11.328s 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 ************************************ 00:22:13.152 END TEST nvmf_digest 00:22:13.152 ************************************ 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 ************************************ 00:22:13.152 START TEST nvmf_host_multipath 00:22:13.152 ************************************ 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:13.152 * Looking for test storage... 
00:22:13.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:22:13.152 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:13.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.411 --rc genhtml_branch_coverage=1 00:22:13.411 --rc genhtml_function_coverage=1 00:22:13.411 --rc genhtml_legend=1 00:22:13.411 --rc geninfo_all_blocks=1 00:22:13.411 --rc geninfo_unexecuted_blocks=1 00:22:13.411 00:22:13.411 ' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:13.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.411 --rc genhtml_branch_coverage=1 00:22:13.411 --rc genhtml_function_coverage=1 00:22:13.411 --rc genhtml_legend=1 00:22:13.411 --rc geninfo_all_blocks=1 00:22:13.411 --rc geninfo_unexecuted_blocks=1 00:22:13.411 00:22:13.411 ' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:13.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.411 --rc genhtml_branch_coverage=1 00:22:13.411 --rc genhtml_function_coverage=1 00:22:13.411 --rc genhtml_legend=1 00:22:13.411 --rc geninfo_all_blocks=1 00:22:13.411 --rc geninfo_unexecuted_blocks=1 00:22:13.411 00:22:13.411 ' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:13.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.411 --rc genhtml_branch_coverage=1 00:22:13.411 --rc genhtml_function_coverage=1 00:22:13.411 --rc genhtml_legend=1 00:22:13.411 --rc geninfo_all_blocks=1 00:22:13.411 --rc geninfo_unexecuted_blocks=1 00:22:13.411 00:22:13.411 ' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.411 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.412 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:13.412 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:13.412 Cannot find device "nvmf_init_br" 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:13.412 Cannot find device "nvmf_init_br2" 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:13.412 Cannot find device "nvmf_tgt_br" 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:13.412 Cannot find device "nvmf_tgt_br2" 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:13.412 Cannot find device "nvmf_init_br" 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:13.412 Cannot find device "nvmf_init_br2" 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:22:13.412 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:13.412 Cannot find device "nvmf_tgt_br" 00:22:13.670 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:13.671 Cannot find device "nvmf_tgt_br2" 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:13.671 Cannot find device "nvmf_br" 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:13.671 Cannot find device "nvmf_init_if" 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:13.671 Cannot find device "nvmf_init_if2" 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:13.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:13.671 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:13.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:13.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:22:13.928 00:22:13.928 --- 10.0.0.3 ping statistics --- 00:22:13.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.928 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:13.928 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:13.928 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:13.928 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:22:13.928 00:22:13.928 --- 10.0.0.4 ping statistics --- 00:22:13.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.928 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:22:13.929 00:22:13.929 --- 10.0.0.1 ping statistics --- 00:22:13.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.929 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:13.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:13.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:22:13.929 00:22:13.929 --- 10.0.0.2 ping statistics --- 00:22:13.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.929 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80506 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80506 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80506 ']' 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.929 09:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:13.929 [2024-12-09 09:32:51.591490] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:22:13.929 [2024-12-09 09:32:51.591552] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.187 [2024-12-09 09:32:51.744943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:14.187 [2024-12-09 09:32:51.789588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.187 [2024-12-09 09:32:51.789815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.187 [2024-12-09 09:32:51.790188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.187 [2024-12-09 09:32:51.790277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.187 [2024-12-09 09:32:51.790305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.187 [2024-12-09 09:32:51.791343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.187 [2024-12-09 09:32:51.791343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.187 [2024-12-09 09:32:51.833140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:14.755 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.755 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:14.755 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.755 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.755 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:15.012 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.012 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80506 00:22:15.012 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:15.270 [2024-12-09 09:32:52.790398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.270 09:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:15.528 Malloc0 00:22:15.528 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:15.786 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.786 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:16.044 [2024-12-09 09:32:53.635682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:16.044 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:16.302 [2024-12-09 09:32:53.907402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80556 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80556 /var/tmp/bdevperf.sock 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80556 ']' 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.302 09:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:17.235 09:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.235 09:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:17.235 09:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:17.495 09:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:17.755 Nvme0n1 00:22:17.755 09:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:18.027 Nvme0n1 00:22:18.027 09:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:18.027 09:32:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:19.400 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:19.400 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:19.400 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:19.400 09:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:19.400 09:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80601 00:22:19.400 09:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:19.400 09:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:25.966 Attaching 4 probes... 00:22:25.966 @path[10.0.0.3, 4421]: 21563 00:22:25.966 @path[10.0.0.3, 4421]: 21996 00:22:25.966 @path[10.0.0.3, 4421]: 21956 00:22:25.966 @path[10.0.0.3, 4421]: 21958 00:22:25.966 @path[10.0.0.3, 4421]: 21190 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80601 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:25.966 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:26.225 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:26.225 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80721 00:22:26.225 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:26.225 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:32.786 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:32.786 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.786 Attaching 4 probes... 00:22:32.786 @path[10.0.0.3, 4420]: 18034 00:22:32.786 @path[10.0.0.3, 4420]: 19249 00:22:32.786 @path[10.0.0.3, 4420]: 21109 00:22:32.786 @path[10.0.0.3, 4420]: 20665 00:22:32.786 @path[10.0.0.3, 4420]: 19423 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80721 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:32.786 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:33.045 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:33.045 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80829 00:22:33.045 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:33.045 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:39.608 Attaching 4 probes... 00:22:39.608 @path[10.0.0.3, 4421]: 13310 00:22:39.608 @path[10.0.0.3, 4421]: 16972 00:22:39.608 @path[10.0.0.3, 4421]: 20623 00:22:39.608 @path[10.0.0.3, 4421]: 23211 00:22:39.608 @path[10.0.0.3, 4421]: 22979 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80829 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:39.608 09:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:39.608 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:39.608 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:39.608 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:39.608 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80946 00:22:39.608 09:33:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:46.175 Attaching 4 probes... 
00:22:46.175 00:22:46.175 00:22:46.175 00:22:46.175 00:22:46.175 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80946 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:46.175 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:46.433 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:46.433 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81058 00:22:46.433 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:46.433 09:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:52.995 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:52.995 09:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.995 Attaching 4 probes... 
00:22:52.995 @path[10.0.0.3, 4421]: 22591 00:22:52.995 @path[10.0.0.3, 4421]: 22895 00:22:52.995 @path[10.0.0.3, 4421]: 22855 00:22:52.995 @path[10.0.0.3, 4421]: 22879 00:22:52.995 @path[10.0.0.3, 4421]: 22830 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81058 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:52.995 [2024-12-09 09:33:30.420757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71b8d0 is same with the state(6) to be set 00:22:52.995 [2024-12-09 09:33:30.420812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71b8d0 is same with the state(6) to be set 00:22:52.995 [2024-12-09 09:33:30.420822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71b8d0 is same with the state(6) to be set 00:22:52.995 09:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:53.930 09:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:53.930 09:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81181 00:22:53.930 09:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:53.930 09:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.489 Attaching 4 probes... 
00:23:00.489 @path[10.0.0.3, 4420]: 21720 00:23:00.489 @path[10.0.0.3, 4420]: 23015 00:23:00.489 @path[10.0.0.3, 4420]: 23044 00:23:00.489 @path[10.0.0.3, 4420]: 23041 00:23:00.489 @path[10.0.0.3, 4420]: 23071 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81181 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:00.489 [2024-12-09 09:33:37.866398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:00.489 09:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:00.489 09:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:07.048 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:07.048 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81361 00:23:07.048 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:07.048 09:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.628 Attaching 4 probes... 
00:23:13.628 @path[10.0.0.3, 4421]: 22550 00:23:13.628 @path[10.0.0.3, 4421]: 22853 00:23:13.628 @path[10.0.0.3, 4421]: 22882 00:23:13.628 @path[10.0.0.3, 4421]: 22897 00:23:13.628 @path[10.0.0.3, 4421]: 22931 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81361 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80556 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80556 ']' 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80556 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80556 00:23:13.628 killing process with pid 80556 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80556' 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80556 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80556 00:23:13.628 { 00:23:13.628 "results": [ 00:23:13.628 { 00:23:13.628 "job": "Nvme0n1", 00:23:13.628 "core_mask": "0x4", 00:23:13.628 "workload": "verify", 00:23:13.628 "status": "terminated", 00:23:13.628 "verify_range": { 00:23:13.628 "start": 0, 00:23:13.628 "length": 16384 00:23:13.628 }, 00:23:13.628 "queue_depth": 128, 00:23:13.628 "io_size": 4096, 00:23:13.628 "runtime": 54.696104, 00:23:13.628 "iops": 9281.611721375986, 00:23:13.628 "mibps": 36.256295786624946, 00:23:13.628 "io_failed": 0, 00:23:13.628 "io_timeout": 0, 00:23:13.628 "avg_latency_us": 13774.932502407339, 00:23:13.628 "min_latency_us": 842.229718875502, 00:23:13.628 "max_latency_us": 7061253.963052209 00:23:13.628 } 00:23:13.628 ], 00:23:13.628 "core_count": 1 00:23:13.628 } 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80556 00:23:13.628 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:13.628 [2024-12-09 09:32:53.975129] Starting SPDK v25.01-pre git sha1 496bfd677 / 
DPDK 24.03.0 initialization... 00:23:13.628 [2024-12-09 09:32:53.975208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80556 ] 00:23:13.628 [2024-12-09 09:32:54.128474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.628 [2024-12-09 09:32:54.177362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.629 [2024-12-09 09:32:54.218534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:13.629 Running I/O for 90 seconds... 00:23:13.629 9237.00 IOPS, 36.08 MiB/s [2024-12-09T09:33:51.352Z] 10302.50 IOPS, 40.24 MiB/s [2024-12-09T09:33:51.352Z] 10531.00 IOPS, 41.14 MiB/s [2024-12-09T09:33:51.352Z] 10644.25 IOPS, 41.58 MiB/s [2024-12-09T09:33:51.352Z] 10712.20 IOPS, 41.84 MiB/s [2024-12-09T09:33:51.352Z] 10748.17 IOPS, 41.99 MiB/s [2024-12-09T09:33:51.352Z] 10727.00 IOPS, 41.90 MiB/s [2024-12-09T09:33:51.352Z] 10479.62 IOPS, 40.94 MiB/s [2024-12-09T09:33:51.352Z] [2024-12-09 09:33:03.844431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.844780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.844819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.844877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.844911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.844946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.844966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.844981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:13.629 [2024-12-09 09:33:03.845136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.629 [2024-12-09 09:33:03.845693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:13.629 [2024-12-09 09:33:03.845828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.629 [2024-12-09 09:33:03.845842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.845862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.845877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.845897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.845911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.845931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.845945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.845965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.845980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.845999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.630 
[2024-12-09 09:33:03.846222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.630 [2024-12-09 09:33:03.846568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846952] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.846973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.846987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.630 [2024-12-09 09:33:03.847243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.630 [2024-12-09 09:33:03.847264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.631 [2024-12-09 09:33:03.847278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.631 [2024-12-09 09:33:03.847298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.631 [2024-12-09 
00:23:13.631 [2024-12-09 09:33:03.847-03.851] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion traces for the outstanding I/O on qid:1 (READ lba:20936-21056, WRITE lba:21480-21760, len:8, one entry per cid), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:23:13.632 10325.11 IOPS, 40.33 MiB/s [2024-12-09T09:33:51.355Z] 10223.40 IOPS, 39.94 MiB/s [2024-12-09T09:33:51.355Z] 10187.82 IOPS, 39.80 MiB/s [2024-12-09T09:33:51.355Z] 10235.50 IOPS, 39.98 MiB/s [2024-12-09T09:33:51.355Z] 10216.15 IOPS, 39.91 MiB/s [2024-12-09T09:33:51.355Z] 10183.57 IOPS, 39.78 MiB/s [2024-12-09T09:33:51.355Z]
00:23:13.636 [2024-12-09 09:33:10.345-10.351] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion traces for the next batch of outstanding I/O on qid:1 (WRITE lba:7056-7624, READ lba:6608-7048, len:8, one entry per cid), again completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.351128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.351162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.351196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.351230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.351264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.351298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.351317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.351336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.352969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353078] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.353490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.353510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.636 [2024-12-09 09:33:10.370315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.370366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.370387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.370414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.370433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.370474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.370501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.370530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.370549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.370576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.636 [2024-12-09 09:33:10.370595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.636 [2024-12-09 09:33:10.370622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.370642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.370688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.370734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.370791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.370855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.370901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.370950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.370984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.371664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.371727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.371773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.371819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.371865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.371910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.371956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.371982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.372001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.372028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.637 [2024-12-09 09:33:10.372054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.372081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.372100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.372126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.372146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.372180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.372200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:13.637 
[2024-12-09 09:33:10.372227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.372246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.372272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.372291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:13.637 [2024-12-09 09:33:10.372318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.637 [2024-12-09 09:33:10.372337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.372383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.372429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 
cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.372823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.372869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.372914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.372961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.372991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.373011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.373056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.373102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.373148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.373193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.373951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.373974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.374004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.638 [2024-12-09 09:33:10.374027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.374076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.374099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.374133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.374152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.374179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 
[2024-12-09 09:33:10.374198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.638 [2024-12-09 09:33:10.374224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.638 [2024-12-09 09:33:10.374243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7024 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.374956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.374975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.639 [2024-12-09 09:33:10.375797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.375870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.375890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.378199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.378242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.378277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.378297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.378325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.378344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:13.639 [2024-12-09 09:33:10.378371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.639 [2024-12-09 09:33:10.378390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:23:13.639 [2024-12-09 09:33:10.378417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:13.640 [2024-12-09 09:33:10.378436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:13.645 [2024-12-09 09:33:10] (repeated *NOTICE* output condensed: many further nvme_io_qpair_print_command records for WRITE and READ commands on sqid:1/nsid:1, each followed by an spdk_nvme_print_completion record reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1)
09:33:10.390089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.645 [2024-12-09 09:33:10.390620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.645 [2024-12-09 09:33:10.390709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.645 [2024-12-09 09:33:10.390729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.390764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.390798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.390833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.390867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.390901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.390936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.390971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.390990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.391332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:13.646 [2024-12-09 09:33:10.391515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.646 [2024-12-09 09:33:10.391626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:10.391979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:10.392003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.646 9885.47 IOPS, 38.62 MiB/s [2024-12-09T09:33:51.369Z] 9459.69 IOPS, 36.95 MiB/s [2024-12-09T09:33:51.369Z] 9400.18 IOPS, 36.72 MiB/s [2024-12-09T09:33:51.369Z] 9377.94 IOPS, 36.63 MiB/s [2024-12-09T09:33:51.369Z] 9494.05 IOPS, 37.09 MiB/s [2024-12-09T09:33:51.369Z] 9597.35 IOPS, 37.49 MiB/s [2024-12-09T09:33:51.369Z] 9672.14 IOPS, 37.78 MiB/s [2024-12-09T09:33:51.369Z] [2024-12-09 09:33:17.283776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:17.283842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:17.283888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:17.283903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:17.283922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.646 [2024-12-09 09:33:17.283935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:13.646 [2024-12-09 09:33:17.283953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.283965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.283983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.283996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:23:13.647 [2024-12-09 09:33:17.284648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.647 [2024-12-09 09:33:17.284880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.284981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.284999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.285013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.285031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.285044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.285063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.285076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.285095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.285107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.285126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.285138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.285157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.647 [2024-12-09 09:33:17.285170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.647 [2024-12-09 09:33:17.285188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:13.648 [2024-12-09 09:33:17.285588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.648 [2024-12-09 09:33:17.285904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.285970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.285984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:13.648 [2024-12-09 09:33:17.286267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.648 [2024-12-09 09:33:17.286279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:23:13.649 [2024-12-09 09:33:17.286561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.286959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.286977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.286990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.649 [2024-12-09 09:33:17.287769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.287808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.287847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.287884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.287928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.287964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.287988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.288001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.288024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.288037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.288068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.649 [2024-12-09 09:33:17.288082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.288105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:13.649 [2024-12-09 09:33:17.288119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:13.649 [2024-12-09 09:33:17.288143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:17.288678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:17.288691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:13.650 9501.23 IOPS, 37.11 MiB/s [2024-12-09T09:33:51.373Z] 9088.13 IOPS, 35.50 MiB/s [2024-12-09T09:33:51.373Z] 8709.46 IOPS, 34.02 MiB/s [2024-12-09T09:33:51.373Z] 8361.08 IOPS, 32.66 MiB/s [2024-12-09T09:33:51.373Z] 8039.50 IOPS, 31.40 MiB/s [2024-12-09T09:33:51.373Z] 7741.74 IOPS, 30.24 MiB/s [2024-12-09T09:33:51.373Z] 7465.25 IOPS, 29.16 MiB/s [2024-12-09T09:33:51.373Z] 7384.76 IOPS, 28.85 MiB/s [2024-12-09T09:33:51.373Z] 7518.80 IOPS, 29.37 MiB/s [2024-12-09T09:33:51.373Z] 7645.23 IOPS, 29.86 MiB/s [2024-12-09T09:33:51.373Z] 7763.56 IOPS, 30.33 MiB/s [2024-12-09T09:33:51.373Z] 7873.52 IOPS, 30.76 MiB/s [2024-12-09T09:33:51.373Z] 7977.00 IOPS, 31.16 MiB/s [2024-12-09T09:33:51.373Z] [2024-12-09 09:33:30.421113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.650 [2024-12-09 09:33:30.421433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:13.650 [2024-12-09 09:33:30.421846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.650 [2024-12-09 09:33:30.421859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.421877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.421890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 
m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.421908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.421921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.421939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.421952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.421994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 
09:33:30.422224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.651 [2024-12-09 09:33:30.422754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:16 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.651 [2024-12-09 09:33:30.422798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.651 [2024-12-09 09:33:30.422811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.422982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.422995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.652 [2024-12-09 09:33:30.423626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.652 [2024-12-09 09:33:30.423871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:13.652 [2024-12-09 09:33:30.423885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.653 [2024-12-09 09:33:30.423897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.423911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.653 [2024-12-09 09:33:30.423928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.423942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.653 [2024-12-09 09:33:30.423955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.423969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.653 [2024-12-09 09:33:30.423982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.423996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.653 [2024-12-09 09:33:30.424009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.653 [2024-12-09 09:33:30.424036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959aa0 is same with the state(6) to be set 00:23:13.653 [2024-12-09 09:33:30.424065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5704 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6032 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6040 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6056 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6064 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6072 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 
09:33:30.424434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6088 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6096 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6104 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6120 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6128 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424708] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6136 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6152 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6160 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6168 len:8 PRP1 0x0 PRP2 0x0 00:23:13.653 [2024-12-09 09:33:30.424908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.653 [2024-12-09 09:33:30.424920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.653 [2024-12-09 09:33:30.424929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.653 [2024-12-09 09:33:30.424938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.424951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.424963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.424972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:23:13.654 [2024-12-09 09:33:30.424982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6184 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.424994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6192 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6200 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6216 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6224 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6232 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.654 [2024-12-09 09:33:30.425325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.654 [2024-12-09 09:33:30.425335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6248 len:8 PRP1 0x0 PRP2 0x0 00:23:13.654 [2024-12-09 09:33:30.425347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.654 [2024-12-09 09:33:30.425492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.654 [2024-12-09 09:33:30.425521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.654 [2024-12-09 09:33:30.425546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.654 [2024-12-09 09:33:30.425572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.654 [2024-12-09 09:33:30.425599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.654 [2024-12-09 09:33:30.425617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e81e0 is same with the state(6) to be set 00:23:13.654 [2024-12-09 09:33:30.426545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:13.654 
[2024-12-09 09:33:30.426580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e81e0 (9): Bad file descriptor 00:23:13.654 [2024-12-09 09:33:30.426902] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.654 [2024-12-09 09:33:30.426938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e81e0 with addr=10.0.0.3, port=4421 00:23:13.654 [2024-12-09 09:33:30.426953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e81e0 is same with the state(6) to be set 00:23:13.654 [2024-12-09 09:33:30.426994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e81e0 (9): Bad file descriptor 00:23:13.654 [2024-12-09 09:33:30.427019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:13.654 [2024-12-09 09:33:30.427032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:13.654 [2024-12-09 09:33:30.427046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:13.654 [2024-12-09 09:33:30.427058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:13.654 [2024-12-09 09:33:30.427072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:13.654 8074.60 IOPS, 31.54 MiB/s [2024-12-09T09:33:51.377Z] 8168.75 IOPS, 31.91 MiB/s [2024-12-09T09:33:51.377Z] 8238.57 IOPS, 32.18 MiB/s [2024-12-09T09:33:51.377Z] 8323.03 IOPS, 32.51 MiB/s [2024-12-09T09:33:51.377Z] 8405.00 IOPS, 32.83 MiB/s [2024-12-09T09:33:51.377Z] 8482.08 IOPS, 33.13 MiB/s [2024-12-09T09:33:51.377Z] 8556.17 IOPS, 33.42 MiB/s [2024-12-09T09:33:51.377Z] 8626.74 IOPS, 33.70 MiB/s [2024-12-09T09:33:51.377Z] 8693.84 IOPS, 33.96 MiB/s [2024-12-09T09:33:51.377Z] 8758.07 IOPS, 34.21 MiB/s [2024-12-09T09:33:51.377Z] [2024-12-09 09:33:40.465748] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:23:13.654 8818.58 IOPS, 34.45 MiB/s [2024-12-09T09:33:51.377Z] 8876.78 IOPS, 34.67 MiB/s [2024-12-09T09:33:51.377Z] 8932.26 IOPS, 34.89 MiB/s [2024-12-09T09:33:51.377Z] 8985.25 IOPS, 35.10 MiB/s [2024-12-09T09:33:51.377Z] 9033.55 IOPS, 35.29 MiB/s [2024-12-09T09:33:51.377Z] 9081.52 IOPS, 35.47 MiB/s [2024-12-09T09:33:51.377Z] 9126.98 IOPS, 35.65 MiB/s [2024-12-09T09:33:51.377Z] 9171.58 IOPS, 35.83 MiB/s [2024-12-09T09:33:51.377Z] 9214.26 IOPS, 35.99 MiB/s [2024-12-09T09:33:51.377Z] 9255.63 IOPS, 36.15 MiB/s [2024-12-09T09:33:51.377Z] Received shutdown signal, test time was about 54.696732 seconds 00:23:13.654 00:23:13.654 Latency(us) 00:23:13.654 [2024-12-09T09:33:51.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.654 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.654 Verification LBA range: start 0x0 length 0x4000 00:23:13.654 Nvme0n1 : 54.70 9281.61 36.26 0.00 0.00 13774.93 842.23 7061253.96 00:23:13.654 [2024-12-09T09:33:51.377Z] =================================================================================================================== 00:23:13.654 [2024-12-09T09:33:51.377Z] Total : 9281.61 36.26 0.00 0.00 13774.93 842.23 7061253.96 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.654 rmmod nvme_tcp 00:23:13.654 rmmod nvme_fabrics 00:23:13.654 rmmod nvme_keyring 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80506 ']' 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80506 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80506 ']' 00:23:13.654 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80506 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80506 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.655 killing process with pid 80506 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80506' 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80506 00:23:13.655 09:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80506 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:13.655 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.913 09:33:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:23:13.913 00:23:13.913 real 1m0.720s 00:23:13.913 user 2m41.626s 00:23:13.913 sys 0m23.769s 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:13.913 ************************************ 00:23:13.913 END TEST nvmf_host_multipath 00:23:13.913 ************************************ 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:13.913 09:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:13.914 09:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.914 09:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.914 ************************************ 00:23:13.914 START TEST nvmf_timeout 00:23:13.914 ************************************ 00:23:13.914 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:13.914 * Looking for test storage... 00:23:13.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.914 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:13.914 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:23:13.914 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:14.172 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:14.172 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.172 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.172 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.172 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.172 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:14.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.173 --rc genhtml_branch_coverage=1 00:23:14.173 --rc genhtml_function_coverage=1 00:23:14.173 --rc genhtml_legend=1 00:23:14.173 --rc geninfo_all_blocks=1 00:23:14.173 --rc geninfo_unexecuted_blocks=1 00:23:14.173 00:23:14.173 ' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:14.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.173 --rc genhtml_branch_coverage=1 00:23:14.173 --rc genhtml_function_coverage=1 00:23:14.173 --rc genhtml_legend=1 00:23:14.173 --rc geninfo_all_blocks=1 00:23:14.173 --rc geninfo_unexecuted_blocks=1 00:23:14.173 00:23:14.173 ' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:14.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.173 --rc genhtml_branch_coverage=1 00:23:14.173 --rc genhtml_function_coverage=1 00:23:14.173 --rc genhtml_legend=1 00:23:14.173 --rc geninfo_all_blocks=1 00:23:14.173 --rc geninfo_unexecuted_blocks=1 00:23:14.173 00:23:14.173 ' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:14.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.173 --rc genhtml_branch_coverage=1 00:23:14.173 --rc genhtml_function_coverage=1 00:23:14.173 --rc genhtml_legend=1 00:23:14.173 --rc geninfo_all_blocks=1 00:23:14.173 --rc geninfo_unexecuted_blocks=1 00:23:14.173 00:23:14.173 ' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.173 
09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.173 09:33:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:14.173 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:14.174 Cannot find device "nvmf_init_br" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:14.174 Cannot find device "nvmf_init_br2" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:14.174 Cannot find device "nvmf_tgt_br" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:14.174 Cannot find device "nvmf_tgt_br2" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:14.174 Cannot find device "nvmf_init_br" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:14.174 Cannot find device "nvmf_init_br2" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:14.174 Cannot find device "nvmf_tgt_br" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:14.174 Cannot find device "nvmf_tgt_br2" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:14.174 Cannot find device "nvmf_br" 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:14.174 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:14.454 Cannot find device "nvmf_init_if" 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:14.454 Cannot find device "nvmf_init_if2" 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:14.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:14.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:14.454 09:33:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:14.454 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:14.455 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
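[Editorial note, not part of the captured log] The nvmf/common.sh trace above builds the virtual test network that the timeout test runs over. A condensed shell sketch of that setup follows; the namespace, interface names, bridge, addresses and port are taken directly from the trace, while the second initiator/target pair (nvmf_init_if2 / nvmf_tgt_if2, 10.0.0.2 / 10.0.0.4), the iptables comment tags and the error handling are elided:

  # network namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if end carries the address, the *_br end is plugged into the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator stays in the root namespace (10.0.0.1), target lives in the namespace (10.0.0.3)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side veth ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check, as in the trace: the target address must answer from the root namespace
  ping -c 1 10.0.0.3

The ping output that follows in the log (10.0.0.3, 10.0.0.4 from the root namespace, 10.0.0.1, 10.0.0.2 from inside nvmf_tgt_ns_spdk) is the verification step for exactly this topology.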
00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:14.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:14.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:23:14.713 00:23:14.713 --- 10.0.0.3 ping statistics --- 00:23:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.713 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:14.713 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:14.713 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:23:14.713 00:23:14.713 --- 10.0.0.4 ping statistics --- 00:23:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.713 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:14.713 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:14.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:14.714 00:23:14.714 --- 10.0.0.1 ping statistics --- 00:23:14.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.714 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:14.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:23:14.714 00:23:14.714 --- 10.0.0.2 ping statistics --- 00:23:14.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.714 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81722 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81722 00:23:14.714 09:33:52 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81722 ']' 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.714 09:33:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:14.714 [2024-12-09 09:33:52.328330] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:23:14.714 [2024-12-09 09:33:52.328398] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.972 [2024-12-09 09:33:52.482191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:14.972 [2024-12-09 09:33:52.528516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.972 [2024-12-09 09:33:52.528562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.972 [2024-12-09 09:33:52.528572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.972 [2024-12-09 09:33:52.528580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.972 [2024-12-09 09:33:52.528587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.972 [2024-12-09 09:33:52.529533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.972 [2024-12-09 09:33:52.529534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.972 [2024-12-09 09:33:52.571377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.539 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:15.798 [2024-12-09 09:33:53.440775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.798 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:16.057 Malloc0 00:23:16.057 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.316 09:33:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.575 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:16.835 [2024-12-09 09:33:54.308341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81771 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81771 /var/tmp/bdevperf.sock 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81771 ']' 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.835 09:33:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:16.835 [2024-12-09 09:33:54.374666] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:23:16.835 [2024-12-09 09:33:54.374736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81771 ] 00:23:16.835 [2024-12-09 09:33:54.525704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.095 [2024-12-09 09:33:54.569650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.095 [2024-12-09 09:33:54.610829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:17.662 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.662 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:17.662 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:17.920 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:18.178 NVMe0n1 00:23:18.178 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81789 00:23:18.178 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:18.178 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.178 Running I/O for 10 seconds... 
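[Editorial note, not part of the captured log] Before the 10-second run starts, host/timeout.sh wires up the target and a bdevperf initiator over RPC, as shown in the xtrace above. A condensed sketch of that sequence follows; every path, NQN, address and timeout value is the one visible in the trace, and rpc_py / bdevperf_rpc_sock are the script's own variables set earlier in the log:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  # target side: TCP transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1, listener on 10.0.0.3:4420
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # initiator side: bdevperf started in -z mode, controller attached with a 5 s loss timeout
  # and a 2 s reconnect delay, which is what the later abort/reset records exercise
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 10 -f &
  sleep 1   # the script itself polls with waitforlisten until $bdevperf_rpc_sock accepts RPCs
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_options -r -1
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # kick off the timed I/O run ("Running I/O for 10 seconds..." in the log)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bdevperf_rpc_sock perform_tests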
00:23:19.114 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:19.375 11233.00 IOPS, 43.88 MiB/s [2024-12-09T09:33:57.098Z] [2024-12-09 09:33:56.888129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.375 [2024-12-09 09:33:56.888344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.375 [2024-12-09 09:33:56.888355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.375 [2024-12-09 09:33:56.888363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... the log then repeats the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair for every remaining queued request of this cycle - WRITE lba:100568 through lba:100992 and READ lba:99976 through lba:100480 - each completed as ABORTED - SQ DELETION (00/08), with one interleaved *ERROR* "The recv state of tqpair=0x1e2e690 is same with the state(6) to be set" and the nvme_qpair_abort_queued_reqs "aborting queued i/o" / nvme_qpair_manual_complete_request "Command completed manually" notices ...)
00:23:19.379 [2024-12-09 09:33:56.890959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-09 09:33:56.891018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcee50 (9): Bad file descriptor
[2024-12-09 09:33:56.891103] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-09 09:33:56.891117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcee50 with addr=10.0.0.3, port=4420
[2024-12-09 09:33:56.891127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcee50 is same with the state(6) to be set
[2024-12-09 09:33:56.891140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcee50 (9): Bad file descriptor
[2024-12-09 09:33:56.891154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
[2024-12-09 09:33:56.891162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
[2024-12-09 09:33:56.891178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-12-09 09:33:56.891187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
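The block above is one complete reset cycle: queued I/O is drained with ABORTED - SQ DELETION, the TCP reconnect to 10.0.0.3:4420 is refused (errno 111, ECONNREFUSED, since the target is not accepting connections at this point), and bdev_nvme marks the attempt failed before scheduling the next retry. A minimal shell sketch for watching the same state from outside the test, reusing only the RPC socket and commands that appear in this run (the polling loop itself is illustrative and not part of host/timeout.sh):

    # Report whether bdevperf still has an NVMe controller registered while the
    # target is unreachable; the name stops appearing once bdev_nvme gives up on it.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    while sleep 1; do
      "$rpc_py" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    done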
00:23:19.379 [2024-12-09 09:33:56.891197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:19.379 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:21.250 6248.50 IOPS, 24.41 MiB/s [2024-12-09T09:33:58.973Z] 4165.67 IOPS, 16.27 MiB/s [2024-12-09T09:33:58.973Z] [2024-12-09 09:33:58.888140] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.250 [2024-12-09 09:33:58.888189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcee50 with addr=10.0.0.3, port=4420 00:23:21.250 [2024-12-09 09:33:58.888202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcee50 is same with the state(6) to be set 00:23:21.250 [2024-12-09 09:33:58.888223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcee50 (9): Bad file descriptor 00:23:21.250 [2024-12-09 09:33:58.888239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:21.250 [2024-12-09 09:33:58.888249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:21.250 [2024-12-09 09:33:58.888259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:21.250 [2024-12-09 09:33:58.888268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:21.250 [2024-12-09 09:33:58.888279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:21.250 09:33:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:21.250 09:33:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.250 09:33:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:21.508 09:33:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:21.508 09:33:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:21.508 09:33:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:21.508 09:33:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:21.766 09:33:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:21.766 09:33:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:23.278 3124.25 IOPS, 12.20 MiB/s [2024-12-09T09:34:01.001Z] 2499.40 IOPS, 9.76 MiB/s [2024-12-09T09:34:01.001Z] [2024-12-09 09:34:00.885260] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.278 [2024-12-09 09:34:00.885311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcee50 with addr=10.0.0.3, port=4420 00:23:23.278 [2024-12-09 09:34:00.885325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcee50 is same with the state(6) to be set 00:23:23.278 [2024-12-09 09:34:00.885346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcee50 (9): Bad file descriptor 00:23:23.278 [2024-12-09 09:34:00.885363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:23.278 [2024-12-09 09:34:00.885372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:23.278 [2024-12-09 09:34:00.885382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:23.278 [2024-12-09 09:34:00.885392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:23.278 [2024-12-09 09:34:00.885404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:25.140 2082.83 IOPS, 8.14 MiB/s [2024-12-09T09:34:03.144Z] 1785.29 IOPS, 6.97 MiB/s [2024-12-09T09:34:03.144Z] [2024-12-09 09:34:02.882278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:25.421 [2024-12-09 09:34:02.882329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:25.421 [2024-12-09 09:34:02.882339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:25.421 [2024-12-09 09:34:02.882349] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:25.421 [2024-12-09 09:34:02.882359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:26.364 1562.12 IOPS, 6.10 MiB/s 00:23:26.364 Latency(us) 00:23:26.364 [2024-12-09T09:34:04.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.364 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.364 Verification LBA range: start 0x0 length 0x4000 00:23:26.364 NVMe0n1 : 8.11 1541.85 6.02 15.79 0.00 81948.71 2789.89 7061253.96 00:23:26.364 [2024-12-09T09:34:04.087Z] =================================================================================================================== 00:23:26.364 [2024-12-09T09:34:04.087Z] Total : 1541.85 6.02 15.79 0.00 81948.71 2789.89 7061253.96 00:23:26.364 { 00:23:26.364 "results": [ 00:23:26.364 { 00:23:26.364 "job": "NVMe0n1", 00:23:26.364 "core_mask": "0x4", 00:23:26.364 "workload": "verify", 00:23:26.364 "status": "finished", 00:23:26.364 "verify_range": { 00:23:26.364 "start": 0, 00:23:26.364 "length": 16384 00:23:26.364 }, 00:23:26.364 "queue_depth": 128, 00:23:26.364 "io_size": 4096, 00:23:26.364 "runtime": 8.105222, 00:23:26.364 "iops": 1541.8454917089255, 00:23:26.364 "mibps": 6.02283395198799, 00:23:26.364 "io_failed": 128, 00:23:26.364 "io_timeout": 0, 00:23:26.364 "avg_latency_us": 81948.70616997892, 00:23:26.364 "min_latency_us": 2789.8859437751003, 00:23:26.364 "max_latency_us": 7061253.963052209 00:23:26.364 } 00:23:26.364 ], 00:23:26.365 "core_count": 1 00:23:26.365 } 00:23:26.934 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:26.935 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.935 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:26.935 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:26.935 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:26.935 09:34:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:26.935 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:27.194 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:27.194 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81789 00:23:27.194 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81771 00:23:27.194 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81771 ']' 00:23:27.194 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81771 00:23:27.194 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81771 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.195 killing process with pid 81771 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81771' 00:23:27.195 Received shutdown signal, test time was about 9.049307 seconds 00:23:27.195 00:23:27.195 Latency(us) 00:23:27.195 [2024-12-09T09:34:04.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.195 [2024-12-09T09:34:04.918Z] =================================================================================================================== 00:23:27.195 [2024-12-09T09:34:04.918Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81771 00:23:27.195 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81771 00:23:27.453 09:34:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:27.453 [2024-12-09 09:34:05.165739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81906 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81906 /var/tmp/bdevperf.sock 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81906 ']' 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.712 09:34:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:27.712 [2024-12-09 09:34:05.236812] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:23:27.712 [2024-12-09 09:34:05.236888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81906 ] 00:23:27.712 [2024-12-09 09:34:05.371002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.712 [2024-12-09 09:34:05.418413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.969 [2024-12-09 09:34:05.459545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:28.536 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.536 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:28.536 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:28.795 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:29.054 NVMe0n1 00:23:29.054 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81924 00:23:29.054 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.054 09:34:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:29.054 Running I/O for 10 seconds... 
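Condensed from the setup lines above (socket path, controller name, and flag values exactly as logged; this is a recap of the test's own RPC calls, not an addition to them), the attach that arms the reconnect/timeout behavior exercised next is, in sketch form, with bdevperf already running on /var/tmp/bdevperf.sock as launched above:

    # Attach the target with reconnect attempts 1 s apart, fast I/O failure after 2 s,
    # and the controller declared lost after 5 s without a successful reconnect.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    "$rpc_py" -s "$sock" bdev_nvme_set_options -r -1
    "$rpc_py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # The test then starts the I/O job in the background and records its pid (rpc_pid in the log).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &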
00:23:29.992 09:34:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:30.253 9364.00 IOPS, 36.58 MiB/s [2024-12-09T09:34:07.976Z] [2024-12-09 09:34:07.802848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.253 [2024-12-09 09:34:07.802900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.802919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.253 [2024-12-09 09:34:07.802928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.802939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.253 [2024-12-09 09:34:07.802949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.802959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.253 [2024-12-09 09:34:07.802968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.802978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.802987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.802997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83624 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.253 [2024-12-09 09:34:07.803258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:30.253 [2024-12-09 09:34:07.803278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.253 [2024-12-09 09:34:07.803297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.253 [2024-12-09 09:34:07.803316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.253 [2024-12-09 09:34:07.803326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.254 [2024-12-09 09:34:07.803977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.803987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.803995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.804005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.804014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.804024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.804033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.804043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.804051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.804062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.804070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.254 [2024-12-09 09:34:07.804080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.254 [2024-12-09 09:34:07.804088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 
09:34:07.804248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.255 [2024-12-09 09:34:07.804586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.255 [2024-12-09 09:34:07.804831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.255 [2024-12-09 09:34:07.804839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.804857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.804875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.804893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.804912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.804931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.804949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.804967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.804986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.804996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.805009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 
[2024-12-09 09:34:07.805027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.256 [2024-12-09 09:34:07.805045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.256 [2024-12-09 09:34:07.805335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798690 is same with the state(6) to be set 00:23:30.256 [2024-12-09 09:34:07.805357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.256 [2024-12-09 09:34:07.805363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.256 [2024-12-09 09:34:07.805371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84216 len:8 PRP1 0x0 PRP2 0x0 00:23:30.256 [2024-12-09 09:34:07.805379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.256 [2024-12-09 09:34:07.805618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:30.256 [2024-12-09 09:34:07.805681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:30.256 [2024-12-09 09:34:07.805768] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.256 [2024-12-09 09:34:07.805783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738e50 with addr=10.0.0.3, 
port=4420 00:23:30.256 [2024-12-09 09:34:07.805792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738e50 is same with the state(6) to be set 00:23:30.256 [2024-12-09 09:34:07.805806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:30.256 [2024-12-09 09:34:07.805819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:30.256 [2024-12-09 09:34:07.805827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:30.256 [2024-12-09 09:34:07.805838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:30.256 [2024-12-09 09:34:07.805847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:30.256 [2024-12-09 09:34:07.805857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:30.256 09:34:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:31.193 5224.00 IOPS, 20.41 MiB/s [2024-12-09T09:34:08.916Z] [2024-12-09 09:34:08.804332] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.193 [2024-12-09 09:34:08.804373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738e50 with addr=10.0.0.3, port=4420 00:23:31.193 [2024-12-09 09:34:08.804386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738e50 is same with the state(6) to be set 00:23:31.193 [2024-12-09 09:34:08.804405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:31.193 [2024-12-09 09:34:08.804429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:31.193 [2024-12-09 09:34:08.804438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:31.193 [2024-12-09 09:34:08.804449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:31.193 [2024-12-09 09:34:08.804467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:31.193 [2024-12-09 09:34:08.804478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:31.193 09:34:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:31.452 [2024-12-09 09:34:09.012115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:31.452 09:34:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81924 00:23:32.301 3482.67 IOPS, 13.60 MiB/s [2024-12-09T09:34:10.024Z] [2024-12-09 09:34:09.821778] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:23:34.174 2612.00 IOPS, 10.20 MiB/s [2024-12-09T09:34:12.893Z] 4278.60 IOPS, 16.71 MiB/s [2024-12-09T09:34:13.830Z] 5621.50 IOPS, 21.96 MiB/s [2024-12-09T09:34:14.767Z] 6579.57 IOPS, 25.70 MiB/s [2024-12-09T09:34:15.703Z] 7256.12 IOPS, 28.34 MiB/s [2024-12-09T09:34:17.079Z] 7561.00 IOPS, 29.54 MiB/s [2024-12-09T09:34:17.079Z] 7803.30 IOPS, 30.48 MiB/s
00:23:39.356 Latency(us)
00:23:39.356 [2024-12-09T09:34:17.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:39.356 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:39.356 Verification LBA range: start 0x0 length 0x4000
00:23:39.356 NVMe0n1 : 10.01 7808.69 30.50 0.00 0.00 16367.95 1138.33 3018551.31
00:23:39.356 [2024-12-09T09:34:17.079Z] ===================================================================================================================
00:23:39.356 [2024-12-09T09:34:17.079Z] Total : 7808.69 30.50 0.00 0.00 16367.95 1138.33 3018551.31
00:23:39.356 {
00:23:39.356 "results": [
00:23:39.356 {
00:23:39.356 "job": "NVMe0n1",
00:23:39.356 "core_mask": "0x4",
00:23:39.356 "workload": "verify",
00:23:39.356 "status": "finished",
00:23:39.356 "verify_range": {
00:23:39.356 "start": 0,
00:23:39.356 "length": 16384
00:23:39.356 },
00:23:39.356 "queue_depth": 128,
00:23:39.356 "io_size": 4096,
00:23:39.356 "runtime": 10.009489,
00:23:39.356 "iops": 7808.690333742312,
00:23:39.356 "mibps": 30.502696616180906,
00:23:39.356 "io_failed": 0,
00:23:39.356 "io_timeout": 0,
00:23:39.356 "avg_latency_us": 16367.953737669168,
00:23:39.356 "min_latency_us": 1138.3261044176706,
00:23:39.356 "max_latency_us": 3018551.3124497994
00:23:39.356 }
00:23:39.356 ],
00:23:39.356 "core_count": 1
00:23:39.356 }
00:23:39.356 09:34:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82034
00:23:39.356 09:34:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:39.356 09:34:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:39.356 Running I/O for 10 seconds...
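The bdevperf summary table and the JSON block describe the same 10-second run, and the derived numbers are mutually consistent: throughput in MiB/s is just IOPS times the 4096-byte I/O size, and IOPS times the average latency recovers roughly the configured queue depth of 128 (Little's law). A small consistency check using values copied from the JSON above, shown only as a sketch:

# Consistency check on the bdevperf results (values copied verbatim from the JSON block above).
iops = 7808.690333742312
io_size = 4096                      # bytes per I/O
avg_latency_us = 16367.953737669168
queue_depth = 128

mibps = iops * io_size / (1024 * 1024)
print(f"MiB/s: {mibps:.2f}")        # ~30.50, matching the reported "mibps"

# Little's law: average in-flight I/O = IOPS x average latency; should sit near the queue depth.
in_flight = iops * avg_latency_us / 1e6
print(f"in-flight I/O: {in_flight:.1f} (queue depth {queue_depth})")   # ~127.8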
00:23:40.295 09:34:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:40.295 8432.00 IOPS, 32.94 MiB/s [2024-12-09T09:34:18.018Z] [2024-12-09 09:34:17.916885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.295 [2024-12-09 09:34:17.916939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.295 [2024-12-09 09:34:17.916959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.295 [2024-12-09 09:34:17.916969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.295 [2024-12-09 09:34:17.916980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.295 [2024-12-09 09:34:17.916989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.295 [2024-12-09 09:34:17.917000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75600 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:40.296 [2024-12-09 09:34:17.917312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.296 [2024-12-09 09:34:17.917739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.296 [2024-12-09 09:34:17.917757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.296 [2024-12-09 09:34:17.917767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.917991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.917999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.297 [2024-12-09 09:34:17.918018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.297 [2024-12-09 09:34:17.918175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 
[2024-12-09 09:34:17.918298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.297 [2024-12-09 09:34:17.918503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.297 [2024-12-09 09:34:17.918513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.918984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.918992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.298 [2024-12-09 09:34:17.919086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.298 [2024-12-09 09:34:17.919263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.298 [2024-12-09 09:34:17.919272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.299 [2024-12-09 09:34:17.919290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.299 [2024-12-09 09:34:17.919309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.299 [2024-12-09 09:34:17.919328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.299 [2024-12-09 09:34:17.919346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.299 [2024-12-09 09:34:17.919366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.299 [2024-12-09 09:34:17.919384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7971b0 is same with the state(6) to be set 00:23:40.299 [2024-12-09 09:34:17.919406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.299 [2024-12-09 09:34:17.919413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.299 [2024-12-09 09:34:17.919420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75520 len:8 PRP1 0x0 PRP2 0x0 00:23:40.299 [2024-12-09 09:34:17.919428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.299 [2024-12-09 09:34:17.919649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:40.299 [2024-12-09 09:34:17.919723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:40.299 [2024-12-09 09:34:17.919803] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.299 [2024-12-09 09:34:17.919819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738e50 with 
addr=10.0.0.3, port=4420 00:23:40.299 [2024-12-09 09:34:17.919828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738e50 is same with the state(6) to be set 00:23:40.299 [2024-12-09 09:34:17.919841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:40.299 [2024-12-09 09:34:17.919855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:40.299 [2024-12-09 09:34:17.919863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:40.299 [2024-12-09 09:34:17.919874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:40.299 [2024-12-09 09:34:17.919883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:40.299 [2024-12-09 09:34:17.919893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:40.299 09:34:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:41.234 4671.00 IOPS, 18.25 MiB/s [2024-12-09T09:34:18.957Z] [2024-12-09 09:34:18.918374] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.234 [2024-12-09 09:34:18.918427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738e50 with addr=10.0.0.3, port=4420 00:23:41.234 [2024-12-09 09:34:18.918440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738e50 is same with the state(6) to be set 00:23:41.234 [2024-12-09 09:34:18.918468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:41.234 [2024-12-09 09:34:18.918484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:41.234 [2024-12-09 09:34:18.918493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:41.234 [2024-12-09 09:34:18.918503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:41.234 [2024-12-09 09:34:18.918514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
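A brief aside on the failure signature above (not part of the captured log): errno 111 reported by uring_sock_create() is ECONNREFUSED on Linux, i.e. nothing is accepting connections on 10.0.0.3:4420 while bdev_nvme keeps retrying the controller reset; the listener is re-added a few seconds later with nvmf_subsystem_add_listener, after which the reset succeeds. A quick way to confirm the errno mapping on a typical Linux install (command is illustrative, not taken from the run):

  grep -w 111 /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */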
00:23:41.234 [2024-12-09 09:34:18.918524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:42.428 3114.00 IOPS, 12.16 MiB/s [2024-12-09T09:34:20.151Z] [2024-12-09 09:34:19.916998] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.428 [2024-12-09 09:34:19.917046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738e50 with addr=10.0.0.3, port=4420 00:23:42.428 [2024-12-09 09:34:19.917059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738e50 is same with the state(6) to be set 00:23:42.428 [2024-12-09 09:34:19.917078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:42.428 [2024-12-09 09:34:19.917094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:42.428 [2024-12-09 09:34:19.917102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:42.428 [2024-12-09 09:34:19.917113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:42.428 [2024-12-09 09:34:19.917122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:42.428 [2024-12-09 09:34:19.917132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:43.363 2335.50 IOPS, 9.12 MiB/s [2024-12-09T09:34:21.086Z] [2024-12-09 09:34:20.918141] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.363 [2024-12-09 09:34:20.918191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738e50 with addr=10.0.0.3, port=4420 00:23:43.363 [2024-12-09 09:34:20.918205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738e50 is same with the state(6) to be set 00:23:43.363 [2024-12-09 09:34:20.918390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738e50 (9): Bad file descriptor 00:23:43.363 [2024-12-09 09:34:20.918581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:43.363 [2024-12-09 09:34:20.918593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:43.363 [2024-12-09 09:34:20.918604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:43.363 [2024-12-09 09:34:20.918613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:43.363 [2024-12-09 09:34:20.918624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:43.364 09:34:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:43.624 [2024-12-09 09:34:21.131503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:43.624 09:34:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82034 00:23:44.454 1868.40 IOPS, 7.30 MiB/s [2024-12-09T09:34:22.177Z] [2024-12-09 09:34:21.941747] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:23:46.324 3033.50 IOPS, 11.85 MiB/s [2024-12-09T09:34:24.983Z] 4161.86 IOPS, 16.26 MiB/s [2024-12-09T09:34:25.918Z] 5008.62 IOPS, 19.56 MiB/s [2024-12-09T09:34:26.853Z] 5810.33 IOPS, 22.70 MiB/s [2024-12-09T09:34:26.853Z] 6453.70 IOPS, 25.21 MiB/s 00:23:49.130 Latency(us) 00:23:49.130 [2024-12-09T09:34:26.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.130 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.130 Verification LBA range: start 0x0 length 0x4000 00:23:49.131 NVMe0n1 : 10.01 6458.80 25.23 5119.75 0.00 11035.91 486.91 3018551.31 00:23:49.131 [2024-12-09T09:34:26.854Z] =================================================================================================================== 00:23:49.131 [2024-12-09T09:34:26.854Z] Total : 6458.80 25.23 5119.75 0.00 11035.91 0.00 3018551.31 00:23:49.131 { 00:23:49.131 "results": [ 00:23:49.131 { 00:23:49.131 "job": "NVMe0n1", 00:23:49.131 "core_mask": "0x4", 00:23:49.131 "workload": "verify", 00:23:49.131 "status": "finished", 00:23:49.131 "verify_range": { 00:23:49.131 "start": 0, 00:23:49.131 "length": 16384 00:23:49.131 }, 00:23:49.131 "queue_depth": 128, 00:23:49.131 "io_size": 4096, 00:23:49.131 "runtime": 10.006344, 00:23:49.131 "iops": 6458.80253567137, 00:23:49.131 "mibps": 25.229697404966288, 00:23:49.131 "io_failed": 51230, 00:23:49.131 "io_timeout": 0, 00:23:49.131 "avg_latency_us": 11035.908354217152, 00:23:49.131 "min_latency_us": 486.9140562248996, 00:23:49.131 "max_latency_us": 3018551.3124497994 00:23:49.131 } 00:23:49.131 ], 00:23:49.131 "core_count": 1 00:23:49.131 } 00:23:49.131 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81906 00:23:49.131 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81906 ']' 00:23:49.131 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81906 00:23:49.131 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:49.131 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.131 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81906 00:23:49.388 killing process with pid 81906 00:23:49.388 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.388 00:23:49.388 Latency(us) 00:23:49.388 [2024-12-09T09:34:27.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.388 [2024-12-09T09:34:27.111Z] =================================================================================================================== 00:23:49.388 [2024-12-09T09:34:27.111Z] 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.388 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:49.388 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:49.388 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81906' 00:23:49.388 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81906 00:23:49.388 09:34:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81906 00:23:49.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82148 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82148 /var/tmp/bdevperf.sock 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82148 ']' 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:49.388 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:49.388 [2024-12-09 09:34:27.062790] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
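For orientation, using figures taken from the JSON result block of the previous bdevperf run a few lines above: the reported "mibps" is simply "iops" scaled by the 4096-byte I/O size, and the large "io_failed" count largely reflects the I/O aborted while the controller was unreachable during the listener outage. A quick check of the arithmetic (illustrative only):

  awk 'BEGIN { printf "%.2f MiB/s\n", 6458.80253567137 * 4096 / (1024 * 1024) }'
  # prints 25.23 MiB/s, matching the reported "mibps": 25.2296...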
00:23:49.388 [2024-12-09 09:34:27.062871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82148 ] 00:23:49.646 [2024-12-09 09:34:27.211583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.646 [2024-12-09 09:34:27.259862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.646 [2024-12-09 09:34:27.300996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:50.582 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.582 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:50.582 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:50.582 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82164 00:23:50.582 09:34:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:50.582 09:34:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:50.841 NVMe0n1 00:23:50.841 09:34:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82206 00:23:50.841 09:34:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.841 09:34:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:50.841 Running I/O for 10 seconds... 
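To recap the setup interleaved in the trace above (paths and arguments reproduced verbatim from the log; only the shell variables are added here for readability): a second bdevperf is started in wait mode on its own RPC socket, NVMe bdev options are set, the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 so that, per the option names, reconnect attempts are paced 2 s apart and abandoned after 5 s of controller loss, and the run is kicked off through bdevperf.py:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $rpc_py -s $sock bdev_nvme_set_options -r -1 -e 9
  $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

The target listener is then removed again (nvmf_subsystem_remove_listener, next in the log) while the workload is running, which is what exercises the reconnect/timeout path under test.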
00:23:51.776 09:34:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:52.040 19558.00 IOPS, 76.40 MiB/s [2024-12-09T09:34:29.763Z] [2024-12-09 09:34:29.625956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 
00:23:52.040 [2024-12-09 09:34:29.626175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set 00:23:52.040 [2024-12-09 09:34:29.626355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x117de10 is same with the state(6) to be set
00:23:52.040 [2024-12-09 09:34:29.626363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117de10 is same with the state(6) to be set
[... the identical tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* entry for tqpair=0x117de10 repeats on every line from 09:34:29.626371 through 09:34:29.627036 ...]
00:23:52.041 [2024-12-09 09:34:29.627092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 
09:34:29.627349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.041 [2024-12-09 09:34:29.627597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.041 [2024-12-09 09:34:29.627608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78232 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.627985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.627993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:52.042 [2024-12-09 09:34:29.628125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628312] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.042 [2024-12-09 09:34:29.628787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.042 [2024-12-09 09:34:29.628797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.628981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.628991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:52.043 [2024-12-09 09:34:29.629075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 
09:34:29.629263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:52.043 [2024-12-09 09:34:29.629519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fea920 is same with the state(6) to be set 00:23:52.043 [2024-12-09 09:34:29.629539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:52.043 [2024-12-09 09:34:29.629547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:52.043 [2024-12-09 09:34:29.629554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106800 len:8 PRP1 0x0 PRP2 0x0 00:23:52.043 [2024-12-09 09:34:29.629563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.043 [2024-12-09 09:34:29.629819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:52.043 [2024-12-09 09:34:29.629882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7de50 (9): Bad file descriptor 00:23:52.043 [2024-12-09 09:34:29.629965] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.043 [2024-12-09 09:34:29.629979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7de50 with addr=10.0.0.3, port=4420 00:23:52.043 [2024-12-09 09:34:29.629988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7de50 is same with the state(6) to be set 00:23:52.043 [2024-12-09 09:34:29.630002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7de50 (9): Bad file descriptor 00:23:52.043 [2024-12-09 09:34:29.630015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:52.043 [2024-12-09 09:34:29.630023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:52.043 [2024-12-09 09:34:29.630033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
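The "connect() failed, errno = 111" entries above mean the host received ECONNREFUSED while trying to re-establish the TCP queue pair to addr=10.0.0.3, port=4420, consistent with the target listener being down or blocked at this point in the timeout test. A minimal sketch of how the same condition could be confirmed from a shell on the test VM; the address and port are taken from the log entries above, and the availability of nc (netcat) on the image is an assumption:

  # Probe the NVMe/TCP listener seen in the log (10.0.0.3, port 4420).
  # A zero exit status means the port accepts connections; a non-zero
  # status matches the ECONNREFUSED (errno 111) reported by uring_sock_create.
  if nc -z -w 1 10.0.0.3 4420; then
    echo "target port 4420 is accepting connections"
  else
    echo "target port 4420 refused/unreachable (consistent with errno 111)"
  fi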
00:23:52.043 [2024-12-09 09:34:29.630042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:52.043 [2024-12-09 09:34:29.630061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:52.043 09:34:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82206 00:23:53.915 10796.00 IOPS, 42.17 MiB/s [2024-12-09T09:34:31.638Z] 7197.33 IOPS, 28.11 MiB/s [2024-12-09T09:34:31.638Z] [2024-12-09 09:34:31.627026] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.915 [2024-12-09 09:34:31.627087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7de50 with addr=10.0.0.3, port=4420 00:23:53.915 [2024-12-09 09:34:31.627100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7de50 is same with the state(6) to be set 00:23:53.915 [2024-12-09 09:34:31.627119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7de50 (9): Bad file descriptor 00:23:53.915 [2024-12-09 09:34:31.627146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:53.915 [2024-12-09 09:34:31.627156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:53.915 [2024-12-09 09:34:31.627167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:53.915 [2024-12-09 09:34:31.627178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:53.915 [2024-12-09 09:34:31.627189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:56.232 5398.00 IOPS, 21.09 MiB/s [2024-12-09T09:34:33.955Z] 4318.40 IOPS, 16.87 MiB/s [2024-12-09T09:34:33.955Z] [2024-12-09 09:34:33.624117] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.232 [2024-12-09 09:34:33.624177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7de50 with addr=10.0.0.3, port=4420 00:23:56.232 [2024-12-09 09:34:33.624191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7de50 is same with the state(6) to be set 00:23:56.232 [2024-12-09 09:34:33.624213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7de50 (9): Bad file descriptor 00:23:56.232 [2024-12-09 09:34:33.624230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:56.232 [2024-12-09 09:34:33.624239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:56.232 [2024-12-09 09:34:33.624250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:56.232 [2024-12-09 09:34:33.624261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
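Each retry cycle above has the same shape: uring_sock_create reports connect() errno 111, nvme_tcp_qpair_connect_sock fails, the controller is marked failed, bdev_nvme logs "Resetting controller failed.", and nvme_ctrlr_disconnect schedules the next attempt roughly two seconds later (09:34:29, 09:34:31, 09:34:33). A small sketch for tallying those cycles from a saved copy of this console output; the file name build.log is a hypothetical local copy, and the patterns are the literal strings that appear in the entries above:

  LOG=build.log   # hypothetical saved copy of this console output
  echo "refused connects : $(grep -c 'connect() failed, errno = 111' "$LOG")"
  echo "failed resets    : $(grep -c 'Resetting controller failed.' "$LOG")"
  # List the timestamp of each disconnect/reconnect attempt to see the ~2 s spacing.
  grep -o '\[2024-12-09 [0-9:.]*\] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect' "$LOG"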
00:23:56.232 [2024-12-09 09:34:33.624272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:58.177 3598.67 IOPS, 14.06 MiB/s [2024-12-09T09:34:35.900Z] 3084.57 IOPS, 12.05 MiB/s [2024-12-09T09:34:35.900Z] [2024-12-09 09:34:35.621130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:58.177 [2024-12-09 09:34:35.621169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:23:58.177 [2024-12-09 09:34:35.621178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:23:58.177 [2024-12-09 09:34:35.621188] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:23:58.177 [2024-12-09 09:34:35.621199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:23:59.132 2699.00 IOPS, 10.54 MiB/s
00:23:59.132 Latency(us)
00:23:59.132 [2024-12-09T09:34:36.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:59.132 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:23:59.132 NVMe0n1 : 8.11 2662.11 10.40 15.78 0.00 47927.02 6185.12 7061253.96
00:23:59.132 [2024-12-09T09:34:36.855Z] ===================================================================================================================
00:23:59.132 [2024-12-09T09:34:36.855Z] Total : 2662.11 10.40 15.78 0.00 47927.02 6185.12 7061253.96
00:23:59.132 {
00:23:59.132   "results": [
00:23:59.132     {
00:23:59.132       "job": "NVMe0n1",
00:23:59.132       "core_mask": "0x4",
00:23:59.132       "workload": "randread",
00:23:59.132       "status": "finished",
00:23:59.132       "queue_depth": 128,
00:23:59.132       "io_size": 4096,
00:23:59.132       "runtime": 8.110856,
00:23:59.132       "iops": 2662.1111261252818,
00:23:59.132       "mibps": 10.398871586426882,
00:23:59.132       "io_failed": 128,
00:23:59.132       "io_timeout": 0,
00:23:59.132       "avg_latency_us": 47927.024956991874,
00:23:59.132       "min_latency_us": 6185.1244979919675,
00:23:59.132       "max_latency_us": 7061253.963052209
00:23:59.132     }
00:23:59.132   ],
00:23:59.132   "core_count": 1
00:23:59.132 }
00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:59.132 Attaching 5 probes...
00:23:59.132 1136.722574: reset bdev controller NVMe0 00:23:59.132 1136.823965: reconnect bdev controller NVMe0 00:23:59.132 3133.817747: reconnect delay bdev controller NVMe0 00:23:59.132 3133.841736: reconnect bdev controller NVMe0 00:23:59.132 5130.917686: reconnect delay bdev controller NVMe0 00:23:59.132 5130.939605: reconnect bdev controller NVMe0 00:23:59.132 7128.018197: reconnect delay bdev controller NVMe0 00:23:59.132 7128.035994: reconnect bdev controller NVMe0 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82164 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82148 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82148 ']' 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82148 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82148 00:23:59.132 killing process with pid 82148 00:23:59.132 Received shutdown signal, test time was about 8.201351 seconds 00:23:59.132 00:23:59.132 Latency(us) 00:23:59.132 [2024-12-09T09:34:36.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.132 [2024-12-09T09:34:36.855Z] =================================================================================================================== 00:23:59.132 [2024-12-09T09:34:36.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82148' 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82148 00:23:59.132 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82148 00:23:59.392 09:34:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.392 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:59.392 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:59.392 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.392 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.651 09:34:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.651 rmmod nvme_tcp 00:23:59.651 rmmod nvme_fabrics 00:23:59.651 rmmod nvme_keyring 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81722 ']' 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81722 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81722 ']' 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81722 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81722 00:23:59.651 killing process with pid 81722 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81722' 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81722 00:23:59.651 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81722 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:59.910 09:34:37 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:59.910 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.168 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.168 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:00.168 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:00.169 00:24:00.169 real 0m46.218s 00:24:00.169 user 2m12.639s 00:24:00.169 sys 0m6.863s 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:00.169 ************************************ 00:24:00.169 END TEST nvmf_timeout 00:24:00.169 ************************************ 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:00.169 00:24:00.169 real 5m5.704s 00:24:00.169 user 12m45.228s 00:24:00.169 sys 1m24.931s 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.169 ************************************ 00:24:00.169 END TEST nvmf_host 00:24:00.169 ************************************ 00:24:00.169 09:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.169 09:34:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:00.169 09:34:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:00.169 ************************************ 00:24:00.169 END TEST nvmf_tcp 00:24:00.169 ************************************ 00:24:00.169 00:24:00.169 real 12m24.599s 00:24:00.169 user 28m28.961s 00:24:00.169 sys 3m49.921s 00:24:00.169 09:34:37 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.169 09:34:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.426 09:34:37 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:24:00.426 09:34:37 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:00.426 09:34:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.426 09:34:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.426 09:34:37 -- common/autotest_common.sh@10 -- # set +x 00:24:00.426 ************************************ 00:24:00.426 START TEST nvmf_dif 00:24:00.426 ************************************ 00:24:00.426 09:34:37 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:00.426 * Looking for test storage... 
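Before dif.sh gets going, the teardown order used by the nvmf_tcp suite above is worth keeping in view, since dif.sh rebuilds the same topology shortly: firewall rules first, then bridge ports, then the bridge, the veths and the namespace. A consolidated sketch of the commands visible in the trace (the wrapper functions add error handling and the shared _remove_spdk_ns helper, omitted here):

# Sketch of the nvmf_tcp network teardown, commands as traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the comment-tagged test rules
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster
    ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# the nvmf_tgt_ns_spdk namespace itself is removed afterwards by _remove_spdk_ns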
00:24:00.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.427 09:34:38 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.427 --rc genhtml_branch_coverage=1 00:24:00.427 --rc genhtml_function_coverage=1 00:24:00.427 --rc genhtml_legend=1 00:24:00.427 --rc geninfo_all_blocks=1 00:24:00.427 --rc geninfo_unexecuted_blocks=1 00:24:00.427 00:24:00.427 ' 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.427 --rc genhtml_branch_coverage=1 00:24:00.427 --rc genhtml_function_coverage=1 00:24:00.427 --rc genhtml_legend=1 00:24:00.427 --rc geninfo_all_blocks=1 00:24:00.427 --rc geninfo_unexecuted_blocks=1 00:24:00.427 00:24:00.427 ' 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.427 --rc genhtml_branch_coverage=1 00:24:00.427 --rc genhtml_function_coverage=1 00:24:00.427 --rc genhtml_legend=1 00:24:00.427 --rc geninfo_all_blocks=1 00:24:00.427 --rc geninfo_unexecuted_blocks=1 00:24:00.427 00:24:00.427 ' 00:24:00.427 09:34:38 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.427 --rc genhtml_branch_coverage=1 00:24:00.427 --rc genhtml_function_coverage=1 00:24:00.427 --rc genhtml_legend=1 00:24:00.427 --rc geninfo_all_blocks=1 00:24:00.427 --rc geninfo_unexecuted_blocks=1 00:24:00.427 00:24:00.427 ' 00:24:00.427 09:34:38 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.427 09:34:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.686 09:34:38 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.686 09:34:38 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.686 09:34:38 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.686 09:34:38 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.686 09:34:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.686 09:34:38 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.686 09:34:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.686 09:34:38 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:00.686 09:34:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.686 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.686 09:34:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:00.686 09:34:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:00.686 09:34:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:00.686 09:34:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:00.686 09:34:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.686 09:34:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:00.686 09:34:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:00.686 09:34:38 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:00.686 Cannot find device "nvmf_init_br" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:00.686 Cannot find device "nvmf_init_br2" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:00.686 Cannot find device "nvmf_tgt_br" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.686 Cannot find device "nvmf_tgt_br2" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:00.686 Cannot find device "nvmf_init_br" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:00.686 Cannot find device "nvmf_init_br2" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:00.686 Cannot find device "nvmf_tgt_br" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:00.686 Cannot find device "nvmf_tgt_br2" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:00.686 Cannot find device "nvmf_br" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:24:00.686 Cannot find device "nvmf_init_if" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:00.686 Cannot find device "nvmf_init_if2" 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:00.686 09:34:38 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.687 09:34:38 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:00.687 09:34:38 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.997 09:34:38 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:00.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:00.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:24:00.997 00:24:00.997 --- 10.0.0.3 ping statistics --- 00:24:00.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.997 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:00.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:00.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:24:00.997 00:24:00.997 --- 10.0.0.4 ping statistics --- 00:24:00.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.997 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:24:00.997 00:24:00.997 --- 10.0.0.1 ping statistics --- 00:24:00.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.997 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:00.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:00.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:24:00.997 00:24:00.997 --- 10.0.0.2 ping statistics --- 00:24:00.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.997 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:00.997 09:34:38 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:01.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:01.564 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:01.564 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.564 09:34:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:01.564 09:34:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82706 00:24:01.564 09:34:39 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82706 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82706 ']' 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.564 09:34:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:01.821 [2024-12-09 09:34:39.314184] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:24:01.821 [2024-12-09 09:34:39.314251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.821 [2024-12-09 09:34:39.464781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.821 [2024-12-09 09:34:39.510261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
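nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace (so it sees the 10.0.0.3/10.0.0.4 interfaces) and waitforlisten blocks until the RPC socket is serviceable. Reduced to its essentials it behaves roughly like the sketch below; the real helpers also manage the shm ID, traps and timeouts, and the framework_wait_init readiness call is an assumption here rather than copied from the script:

# Rough stand-in for nvmfappstart + waitforlisten (paths as in the log above).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done                 # wait for the RPC socket to appear
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init    # assumed readiness check; returns once app init completes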
00:24:01.821 [2024-12-09 09:34:39.510535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.821 [2024-12-09 09:34:39.510553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.821 [2024-12-09 09:34:39.510562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.821 [2024-12-09 09:34:39.510569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.821 [2024-12-09 09:34:39.510852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.114 [2024-12-09 09:34:39.552943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:24:02.680 09:34:40 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 09:34:40 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.680 09:34:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:02.680 09:34:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 [2024-12-09 09:34:40.237808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.680 09:34:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 ************************************ 00:24:02.680 START TEST fio_dif_1_default 00:24:02.680 ************************************ 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 bdev_null0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:02.680 
09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:02.680 [2024-12-09 09:34:40.301822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:02.680 { 00:24:02.680 "params": { 00:24:02.680 "name": "Nvme$subsystem", 00:24:02.680 "trtype": "$TEST_TRANSPORT", 00:24:02.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.680 "adrfam": "ipv4", 00:24:02.680 "trsvcid": "$NVMF_PORT", 00:24:02.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.680 "hdgst": ${hdgst:-false}, 00:24:02.680 "ddgst": ${ddgst:-false} 00:24:02.680 }, 00:24:02.680 "method": "bdev_nvme_attach_controller" 00:24:02.680 } 00:24:02.680 EOF 00:24:02.680 )") 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 
-- # shift 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:02.680 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:02.681 "params": { 00:24:02.681 "name": "Nvme0", 00:24:02.681 "trtype": "tcp", 00:24:02.681 "traddr": "10.0.0.3", 00:24:02.681 "adrfam": "ipv4", 00:24:02.681 "trsvcid": "4420", 00:24:02.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.681 "hdgst": false, 00:24:02.681 "ddgst": false 00:24:02.681 }, 00:24:02.681 "method": "bdev_nvme_attach_controller" 00:24:02.681 }' 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.681 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:02.940 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:02.940 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:02.940 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:02.940 09:34:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.940 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:02.940 fio-3.35 00:24:02.940 Starting 1 thread 00:24:15.151 00:24:15.151 filename0: (groupid=0, jobs=1): err= 0: pid=82773: Mon Dec 9 09:34:51 2024 00:24:15.151 read: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(472MiB/10001msec) 00:24:15.151 slat (usec): min=5, max=151, avg= 6.08, stdev= 1.19 00:24:15.151 clat (usec): min=286, max=1699, avg=314.68, stdev=19.76 00:24:15.151 lat (usec): min=292, max=1766, avg=320.75, stdev=19.98 00:24:15.151 clat percentiles (usec): 00:24:15.151 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 302], 
20.00th=[ 306], 00:24:15.151 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 314], 60.00th=[ 318], 00:24:15.151 | 70.00th=[ 318], 80.00th=[ 322], 90.00th=[ 326], 95.00th=[ 330], 00:24:15.151 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 482], 99.95th=[ 553], 00:24:15.151 | 99.99th=[ 1237] 00:24:15.151 bw ( KiB/s): min=47744, max=48480, per=100.00%, avg=48336.84, stdev=165.72, samples=19 00:24:15.151 iops : min=11936, max=12120, avg=12084.21, stdev=41.43, samples=19 00:24:15.151 lat (usec) : 500=99.92%, 750=0.06%, 1000=0.01% 00:24:15.151 lat (msec) : 2=0.01% 00:24:15.151 cpu : usr=80.81%, sys=17.66%, ctx=117, majf=0, minf=9 00:24:15.151 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.151 issued rwts: total=120740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.151 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:15.151 00:24:15.151 Run status group 0 (all jobs): 00:24:15.151 READ: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=472MiB (495MB), run=10001-10001msec 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 ************************************ 00:24:15.151 END TEST fio_dif_1_default 00:24:15.151 ************************************ 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.151 00:24:15.151 real 0m11.036s 00:24:15.151 user 0m8.744s 00:24:15.151 sys 0m2.072s 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 09:34:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:15.151 09:34:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.151 09:34:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 ************************************ 00:24:15.151 START TEST fio_dif_1_multi_subsystems 00:24:15.151 ************************************ 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 bdev_null0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.151 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.152 [2024-12-09 09:34:51.416395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.152 bdev_null1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.152 09:34:51 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:15.152 { 00:24:15.152 "params": { 00:24:15.152 "name": "Nvme$subsystem", 00:24:15.152 "trtype": "$TEST_TRANSPORT", 00:24:15.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.152 "adrfam": "ipv4", 00:24:15.152 "trsvcid": "$NVMF_PORT", 00:24:15.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.152 "hdgst": ${hdgst:-false}, 00:24:15.152 "ddgst": ${ddgst:-false} 00:24:15.152 }, 00:24:15.152 "method": "bdev_nvme_attach_controller" 00:24:15.152 } 00:24:15.152 EOF 00:24:15.152 )") 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:15.152 09:34:51 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:15.152 { 00:24:15.152 "params": { 00:24:15.152 "name": "Nvme$subsystem", 00:24:15.152 "trtype": "$TEST_TRANSPORT", 00:24:15.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.152 "adrfam": "ipv4", 00:24:15.152 "trsvcid": "$NVMF_PORT", 00:24:15.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.152 "hdgst": ${hdgst:-false}, 00:24:15.152 "ddgst": ${ddgst:-false} 00:24:15.152 }, 00:24:15.152 "method": "bdev_nvme_attach_controller" 00:24:15.152 } 00:24:15.152 EOF 00:24:15.152 )") 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
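The null bdevs and subsystems the two fio jobs attach to were created a few entries further up. Collected into one place, the setup is the RPC sequence below; rpc_cmd in the trace is effectively the test suite's wrapper around scripts/rpc.py, so the same arguments work with rpc.py directly. The tcp transport with --dif-insert-or-strip was created once when dif.sh started; each test then creates one DIF-capable null bdev and subsystem per index:

# Arguments exactly as issued through rpc_cmd in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip          # done once at dif.sh startup
for i in 0 1; do
    $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" --serial-number "53313233-$i" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done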
00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:15.152 "params": { 00:24:15.152 "name": "Nvme0", 00:24:15.152 "trtype": "tcp", 00:24:15.152 "traddr": "10.0.0.3", 00:24:15.152 "adrfam": "ipv4", 00:24:15.152 "trsvcid": "4420", 00:24:15.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:15.152 "hdgst": false, 00:24:15.152 "ddgst": false 00:24:15.152 }, 00:24:15.152 "method": "bdev_nvme_attach_controller" 00:24:15.152 },{ 00:24:15.152 "params": { 00:24:15.152 "name": "Nvme1", 00:24:15.152 "trtype": "tcp", 00:24:15.152 "traddr": "10.0.0.3", 00:24:15.152 "adrfam": "ipv4", 00:24:15.152 "trsvcid": "4420", 00:24:15.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.152 "hdgst": false, 00:24:15.152 "ddgst": false 00:24:15.152 }, 00:24:15.152 "method": "bdev_nvme_attach_controller" 00:24:15.152 }' 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.152 09:34:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:15.152 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:15.152 fio-3.35 00:24:15.152 Starting 2 threads 00:24:25.165 00:24:25.165 filename0: (groupid=0, jobs=1): err= 0: pid=82939: Mon Dec 9 09:35:02 2024 00:24:25.165 read: IOPS=6361, BW=24.9MiB/s (26.1MB/s)(249MiB/10001msec) 00:24:25.165 slat (nsec): min=5840, max=54778, avg=11136.98, stdev=3104.47 00:24:25.165 clat (usec): min=480, max=1947, avg=599.21, stdev=31.00 00:24:25.165 lat (usec): min=489, max=1967, avg=610.35, stdev=32.03 00:24:25.165 clat percentiles (usec): 00:24:25.165 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 578], 00:24:25.165 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:24:25.165 | 70.00th=[ 619], 80.00th=[ 619], 90.00th=[ 635], 95.00th=[ 635], 00:24:25.165 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 758], 00:24:25.165 | 99.99th=[ 1106] 00:24:25.165 bw ( KiB/s): min=25280, max=25664, per=50.03%, avg=25461.89, stdev=86.03, samples=19 00:24:25.165 iops : min= 6320, max= 
6416, avg=6365.47, stdev=21.51, samples=19 00:24:25.165 lat (usec) : 500=0.04%, 750=99.90%, 1000=0.02% 00:24:25.165 lat (msec) : 2=0.03% 00:24:25.165 cpu : usr=88.07%, sys=10.88%, ctx=11, majf=0, minf=0 00:24:25.165 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.165 issued rwts: total=63624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.165 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:25.165 filename1: (groupid=0, jobs=1): err= 0: pid=82940: Mon Dec 9 09:35:02 2024 00:24:25.165 read: IOPS=6361, BW=24.9MiB/s (26.1MB/s)(249MiB/10001msec) 00:24:25.165 slat (nsec): min=5808, max=38374, avg=10912.81, stdev=3090.43 00:24:25.165 clat (usec): min=499, max=1930, avg=599.96, stdev=24.00 00:24:25.165 lat (usec): min=508, max=1936, avg=610.87, stdev=24.29 00:24:25.165 clat percentiles (usec): 00:24:25.165 | 1.00th=[ 562], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 586], 00:24:25.165 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 603], 00:24:25.165 | 70.00th=[ 611], 80.00th=[ 619], 90.00th=[ 627], 95.00th=[ 635], 00:24:25.165 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 742], 00:24:25.165 | 99.99th=[ 1123] 00:24:25.165 bw ( KiB/s): min=25280, max=25664, per=50.03%, avg=25461.89, stdev=84.70, samples=19 00:24:25.165 iops : min= 6320, max= 6416, avg=6365.47, stdev=21.17, samples=19 00:24:25.165 lat (usec) : 500=0.01%, 750=99.95%, 1000=0.02% 00:24:25.165 lat (msec) : 2=0.03% 00:24:25.165 cpu : usr=88.03%, sys=10.99%, ctx=23, majf=0, minf=0 00:24:25.165 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.165 issued rwts: total=63624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.165 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:25.165 00:24:25.165 Run status group 0 (all jobs): 00:24:25.165 READ: bw=49.7MiB/s (52.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=497MiB (521MB), run=10001-10001msec 00:24:25.165 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:25.165 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:25.165 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 00:24:25.166 real 0m11.147s 00:24:25.166 user 0m18.363s 00:24:25.166 sys 0m2.529s 00:24:25.166 ************************************ 00:24:25.166 END TEST fio_dif_1_multi_subsystems 00:24:25.166 ************************************ 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:25.166 09:35:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:25.166 09:35:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 ************************************ 00:24:25.166 START TEST fio_dif_rand_params 00:24:25.166 ************************************ 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:25.166 09:35:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 bdev_null0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.166 [2024-12-09 09:35:02.646769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:25.166 { 00:24:25.166 "params": { 00:24:25.166 "name": "Nvme$subsystem", 00:24:25.166 "trtype": "$TEST_TRANSPORT", 00:24:25.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.166 "adrfam": "ipv4", 00:24:25.166 "trsvcid": "$NVMF_PORT", 00:24:25.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.166 "hdgst": ${hdgst:-false}, 00:24:25.166 "ddgst": ${ddgst:-false} 00:24:25.166 }, 00:24:25.166 "method": "bdev_nvme_attach_controller" 00:24:25.166 } 00:24:25.166 EOF 00:24:25.166 )") 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:25.166 
09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
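At this point the JSON config is assembled and the wrapper is about to launch fio with the SPDK bdev ioengine, prepending libasan/libclang_rt.asan to LD_PRELOAD only when ldd shows the plugin links against a sanitizer (it does not here, so asan_lib stays empty). A minimal sketch of the resulting invocation, using the paths from this run; bdev.json and job.fio are stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors the test passes instead:

    # Preload the SPDK fio bdev plugin and point fio at the generated
    # JSON config plus the generated job file.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf bdev.json job.fio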
00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:25.166 "params": { 00:24:25.166 "name": "Nvme0", 00:24:25.166 "trtype": "tcp", 00:24:25.166 "traddr": "10.0.0.3", 00:24:25.166 "adrfam": "ipv4", 00:24:25.166 "trsvcid": "4420", 00:24:25.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.166 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:25.166 "hdgst": false, 00:24:25.166 "ddgst": false 00:24:25.166 }, 00:24:25.166 "method": "bdev_nvme_attach_controller" 00:24:25.166 }' 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:25.166 09:35:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:25.425 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:25.425 ... 
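The "filename0:" header that follows comes from the job file produced by gen_fio_conf. An approximate reconstruction under the parameters this test case sets (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5); it is an assumption for illustration, not a verbatim copy of the generator's output:

    # Assumed reconstruction of the generated job file: three jobs doing
    # 128 KiB random reads at queue depth 3 for 5 seconds against the
    # Nvme0n1 bdev exposed by the JSON config above.
    cat > job.fio <<'EOF'
    [global]
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1
    EOF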
00:24:25.425 fio-3.35 00:24:25.425 Starting 3 threads 00:24:31.985 00:24:31.985 filename0: (groupid=0, jobs=1): err= 0: pid=83096: Mon Dec 9 09:35:08 2024 00:24:31.985 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(204MiB/5009msec) 00:24:31.985 slat (nsec): min=5838, max=37060, avg=13942.75, stdev=4183.60 00:24:31.985 clat (usec): min=7546, max=9691, avg=9160.94, stdev=84.16 00:24:31.985 lat (usec): min=7556, max=9702, avg=9174.88, stdev=84.20 00:24:31.985 clat percentiles (usec): 00:24:31.985 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9110], 00:24:31.985 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9110], 00:24:31.985 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9241], 95.00th=[ 9241], 00:24:31.985 | 99.00th=[ 9241], 99.50th=[ 9634], 99.90th=[ 9634], 99.95th=[ 9634], 00:24:31.985 | 99.99th=[ 9634] 00:24:31.985 bw ( KiB/s): min=41472, max=42240, per=33.37%, avg=41779.20, stdev=396.59, samples=10 00:24:31.985 iops : min= 324, max= 330, avg=326.40, stdev= 3.10, samples=10 00:24:31.985 lat (msec) : 10=100.00% 00:24:31.985 cpu : usr=88.44%, sys=11.16%, ctx=15, majf=0, minf=0 00:24:31.985 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.985 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:31.985 filename0: (groupid=0, jobs=1): err= 0: pid=83097: Mon Dec 9 09:35:08 2024 00:24:31.985 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(204MiB/5002msec) 00:24:31.985 slat (usec): min=6, max=166, avg=14.79, stdev= 5.71 00:24:31.985 clat (usec): min=9019, max=9825, avg=9161.30, stdev=55.42 00:24:31.985 lat (usec): min=9027, max=9855, avg=9176.09, stdev=55.83 00:24:31.985 clat percentiles (usec): 00:24:31.985 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9110], 00:24:31.985 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9110], 00:24:31.985 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9241], 95.00th=[ 9241], 00:24:31.985 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[ 9765], 99.95th=[ 9765], 00:24:31.985 | 99.99th=[ 9765] 00:24:31.985 bw ( KiB/s): min=41472, max=42240, per=33.33%, avg=41728.00, stdev=384.00, samples=9 00:24:31.985 iops : min= 324, max= 330, avg=326.00, stdev= 3.00, samples=9 00:24:31.985 lat (msec) : 10=100.00% 00:24:31.985 cpu : usr=88.28%, sys=11.08%, ctx=84, majf=0, minf=0 00:24:31.985 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.985 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:31.985 filename0: (groupid=0, jobs=1): err= 0: pid=83098: Mon Dec 9 09:35:08 2024 00:24:31.985 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(204MiB/5002msec) 00:24:31.985 slat (nsec): min=6096, max=39821, avg=15114.11, stdev=3777.89 00:24:31.985 clat (usec): min=9051, max=9766, avg=9160.17, stdev=56.20 00:24:31.985 lat (usec): min=9064, max=9795, avg=9175.28, stdev=56.86 00:24:31.985 clat percentiles (usec): 00:24:31.985 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9110], 00:24:31.985 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9110], 00:24:31.985 | 
70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9241], 95.00th=[ 9241], 00:24:31.985 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[ 9765], 99.95th=[ 9765], 00:24:31.985 | 99.99th=[ 9765] 00:24:31.985 bw ( KiB/s): min=41472, max=42240, per=33.33%, avg=41728.00, stdev=384.00, samples=9 00:24:31.985 iops : min= 324, max= 330, avg=326.00, stdev= 3.00, samples=9 00:24:31.985 lat (msec) : 10=100.00% 00:24:31.985 cpu : usr=87.74%, sys=11.82%, ctx=11, majf=0, minf=0 00:24:31.985 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.985 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.985 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:31.985 00:24:31.985 Run status group 0 (all jobs): 00:24:31.985 READ: bw=122MiB/s (128MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=612MiB (642MB), run=5002-5009msec 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.985 bdev_null0 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.985 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 [2024-12-09 09:35:08.688977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 bdev_null1 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 bdev_null2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:31.986 09:35:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:31.986 { 00:24:31.986 "params": { 00:24:31.986 "name": "Nvme$subsystem", 00:24:31.986 "trtype": "$TEST_TRANSPORT", 00:24:31.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.986 "adrfam": "ipv4", 00:24:31.986 "trsvcid": "$NVMF_PORT", 00:24:31.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.986 "hdgst": ${hdgst:-false}, 00:24:31.986 "ddgst": ${ddgst:-false} 00:24:31.986 }, 00:24:31.986 "method": "bdev_nvme_attach_controller" 00:24:31.986 } 00:24:31.986 EOF 00:24:31.986 )") 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:31.986 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:31.987 { 00:24:31.987 "params": { 00:24:31.987 "name": "Nvme$subsystem", 00:24:31.987 "trtype": "$TEST_TRANSPORT", 00:24:31.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.987 "adrfam": "ipv4", 00:24:31.987 "trsvcid": "$NVMF_PORT", 00:24:31.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.987 "hdgst": ${hdgst:-false}, 00:24:31.987 "ddgst": ${ddgst:-false} 00:24:31.987 }, 00:24:31.987 "method": "bdev_nvme_attach_controller" 00:24:31.987 } 00:24:31.987 EOF 00:24:31.987 )") 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:31.987 { 00:24:31.987 "params": { 00:24:31.987 "name": "Nvme$subsystem", 00:24:31.987 "trtype": "$TEST_TRANSPORT", 00:24:31.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.987 "adrfam": "ipv4", 00:24:31.987 "trsvcid": "$NVMF_PORT", 00:24:31.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.987 "hdgst": ${hdgst:-false}, 00:24:31.987 "ddgst": ${ddgst:-false} 00:24:31.987 }, 00:24:31.987 "method": "bdev_nvme_attach_controller" 00:24:31.987 } 00:24:31.987 EOF 00:24:31.987 )") 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:31.987 "params": { 00:24:31.987 "name": "Nvme0", 00:24:31.987 "trtype": "tcp", 00:24:31.987 "traddr": "10.0.0.3", 00:24:31.987 "adrfam": "ipv4", 00:24:31.987 "trsvcid": "4420", 00:24:31.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:31.987 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:31.987 "hdgst": false, 00:24:31.987 "ddgst": false 00:24:31.987 }, 00:24:31.987 "method": "bdev_nvme_attach_controller" 00:24:31.987 },{ 00:24:31.987 "params": { 00:24:31.987 "name": "Nvme1", 00:24:31.987 "trtype": "tcp", 00:24:31.987 "traddr": "10.0.0.3", 00:24:31.987 "adrfam": "ipv4", 00:24:31.987 "trsvcid": "4420", 00:24:31.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.987 "hdgst": false, 00:24:31.987 "ddgst": false 00:24:31.987 }, 00:24:31.987 "method": "bdev_nvme_attach_controller" 00:24:31.987 },{ 00:24:31.987 "params": { 00:24:31.987 "name": "Nvme2", 00:24:31.987 "trtype": "tcp", 00:24:31.987 "traddr": "10.0.0.3", 00:24:31.987 "adrfam": "ipv4", 00:24:31.987 "trsvcid": "4420", 00:24:31.987 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:31.987 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:31.987 "hdgst": false, 00:24:31.987 "ddgst": false 00:24:31.987 }, 00:24:31.987 "method": "bdev_nvme_attach_controller" 00:24:31.987 }' 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:31.987 09:35:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:31.987 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:31.987 ... 00:24:31.987 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:31.987 ... 00:24:31.987 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:31.987 ... 00:24:31.987 fio-3.35 00:24:31.987 Starting 24 threads 00:24:44.196 00:24:44.196 filename0: (groupid=0, jobs=1): err= 0: pid=83199: Mon Dec 9 09:35:19 2024 00:24:44.196 read: IOPS=297, BW=1189KiB/s (1217kB/s)(11.7MiB/10038msec) 00:24:44.197 slat (usec): min=6, max=8018, avg=23.62, stdev=218.17 00:24:44.197 clat (msec): min=2, max=119, avg=53.67, stdev=20.17 00:24:44.197 lat (msec): min=2, max=119, avg=53.70, stdev=20.17 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 32], 20.00th=[ 40], 00:24:44.197 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 58], 00:24:44.197 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 83], 95.00th=[ 88], 00:24:44.197 | 99.00th=[ 97], 99.50th=[ 113], 99.90th=[ 120], 99.95th=[ 120], 00:24:44.197 | 99.99th=[ 120] 00:24:44.197 bw ( KiB/s): min= 792, max= 2816, per=4.24%, avg=1188.70, stdev=411.04, samples=20 00:24:44.197 iops : min= 198, max= 704, avg=297.15, stdev=102.76, samples=20 00:24:44.197 lat (msec) : 4=2.68%, 10=0.54%, 20=2.61%, 50=33.93%, 100=59.60% 00:24:44.197 lat (msec) : 250=0.64% 00:24:44.197 cpu : usr=41.73%, sys=2.66%, ctx=1139, majf=0, minf=0 00:24:44.197 IO depths : 1=0.2%, 2=1.2%, 4=4.1%, 8=78.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=88.7%, 8=10.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=2983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83200: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=293, BW=1176KiB/s (1204kB/s)(11.5MiB/10007msec) 00:24:44.197 slat (usec): min=4, max=9266, avg=24.72, stdev=271.14 00:24:44.197 clat (msec): min=6, max=107, avg=54.29, stdev=16.73 00:24:44.197 lat (msec): min=6, max=107, avg=54.31, stdev=16.73 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:24:44.197 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 59], 00:24:44.197 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 82], 95.00th=[ 85], 00:24:44.197 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:24:44.197 | 99.99th=[ 108] 00:24:44.197 bw ( KiB/s): min= 816, max= 1405, per=4.13%, avg=1156.47, stdev=171.34, samples=19 00:24:44.197 iops : min= 204, max= 351, avg=289.11, stdev=42.82, samples=19 00:24:44.197 lat (msec) : 10=0.44%, 20=0.75%, 50=41.16%, 100=57.48%, 250=0.17% 00:24:44.197 cpu : usr=33.10%, sys=2.34%, ctx=1014, majf=0, minf=9 00:24:44.197 IO depths : 1=0.1%, 2=0.5%, 4=2.3%, 8=81.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=2942,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83201: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=293, BW=1174KiB/s (1202kB/s)(11.5MiB/10024msec) 00:24:44.197 slat (usec): min=4, max=8064, avg=19.01, stdev=165.95 00:24:44.197 clat (msec): min=10, max=106, avg=54.39, stdev=16.79 00:24:44.197 lat (msec): min=10, max=106, avg=54.41, stdev=16.79 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 40], 00:24:44.197 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 59], 00:24:44.197 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 82], 95.00th=[ 87], 00:24:44.197 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 101], 00:24:44.197 | 99.99th=[ 107] 00:24:44.197 bw ( KiB/s): min= 840, max= 1928, per=4.18%, avg=1172.80, stdev=231.26, samples=20 00:24:44.197 iops : min= 210, max= 482, avg=293.20, stdev=57.81, samples=20 00:24:44.197 lat (msec) : 20=1.53%, 50=39.84%, 100=58.60%, 250=0.03% 00:24:44.197 cpu : usr=32.74%, sys=2.25%, ctx=1221, majf=0, minf=9 00:24:44.197 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=2942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83202: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=278, BW=1115KiB/s (1141kB/s)(10.9MiB/10045msec) 00:24:44.197 slat (usec): min=4, max=7521, avg=23.54, stdev=247.52 00:24:44.197 clat (msec): min=8, max=130, avg=57.23, stdev=17.23 00:24:44.197 lat (msec): min=8, max=130, avg=57.25, stdev=17.24 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 47], 00:24:44.197 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:24:44.197 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 88], 00:24:44.197 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 115], 99.95th=[ 121], 00:24:44.197 | 99.99th=[ 131] 00:24:44.197 bw ( KiB/s): min= 736, max= 2048, per=3.97%, avg=1113.20, stdev=261.91, samples=20 00:24:44.197 iops : min= 184, max= 512, avg=278.30, stdev=65.48, samples=20 00:24:44.197 lat (msec) : 10=0.07%, 20=2.54%, 50=31.58%, 100=65.56%, 250=0.25% 00:24:44.197 cpu : usr=36.87%, sys=2.49%, ctx=1096, majf=0, minf=9 00:24:44.197 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=78.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=89.0%, 8=10.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=2799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83203: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=267, BW=1072KiB/s (1097kB/s)(10.5MiB/10029msec) 00:24:44.197 slat (usec): min=3, max=8041, avg=27.28, stdev=267.69 00:24:44.197 clat (msec): min=9, max=112, avg=59.51, stdev=16.68 00:24:44.197 lat (msec): min=9, max=112, avg=59.54, stdev=16.69 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 48], 00:24:44.197 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:24:44.197 | 70.00th=[ 
68], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 88], 00:24:44.197 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 112], 00:24:44.197 | 99.99th=[ 113] 00:24:44.197 bw ( KiB/s): min= 816, max= 1760, per=3.81%, avg=1068.40, stdev=205.89, samples=20 00:24:44.197 iops : min= 204, max= 440, avg=267.10, stdev=51.47, samples=20 00:24:44.197 lat (msec) : 10=0.52%, 20=1.19%, 50=23.26%, 100=74.69%, 250=0.33% 00:24:44.197 cpu : usr=39.75%, sys=2.47%, ctx=1671, majf=0, minf=9 00:24:44.197 IO depths : 1=0.1%, 2=3.1%, 4=12.5%, 8=69.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=90.9%, 8=6.4%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=2687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83204: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=308, BW=1235KiB/s (1265kB/s)(12.1MiB/10003msec) 00:24:44.197 slat (usec): min=4, max=7066, avg=21.44, stdev=178.35 00:24:44.197 clat (usec): min=1712, max=99836, avg=51721.25, stdev=17393.23 00:24:44.197 lat (usec): min=1724, max=99850, avg=51742.69, stdev=17395.70 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 4], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 39], 00:24:44.197 | 30.00th=[ 41], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 56], 00:24:44.197 | 70.00th=[ 58], 80.00th=[ 63], 90.00th=[ 79], 95.00th=[ 87], 00:24:44.197 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 101], 00:24:44.197 | 99.99th=[ 101] 00:24:44.197 bw ( KiB/s): min= 840, max= 1552, per=4.29%, avg=1202.53, stdev=187.81, samples=19 00:24:44.197 iops : min= 210, max= 388, avg=300.63, stdev=46.95, samples=19 00:24:44.197 lat (msec) : 2=0.10%, 4=0.94%, 10=0.78%, 20=0.32%, 50=43.70% 00:24:44.197 lat (msec) : 100=54.16% 00:24:44.197 cpu : usr=38.36%, sys=2.57%, ctx=1333, majf=0, minf=9 00:24:44.197 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=3089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83205: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=299, BW=1198KiB/s (1226kB/s)(11.7MiB/10029msec) 00:24:44.197 slat (usec): min=5, max=7885, avg=30.45, stdev=285.06 00:24:44.197 clat (msec): min=14, max=116, avg=53.28, stdev=16.79 00:24:44.197 lat (msec): min=14, max=116, avg=53.31, stdev=16.81 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 39], 00:24:44.197 | 30.00th=[ 44], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 57], 00:24:44.197 | 70.00th=[ 59], 80.00th=[ 64], 90.00th=[ 81], 95.00th=[ 87], 00:24:44.197 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 107], 00:24:44.197 | 99.99th=[ 116] 00:24:44.197 bw ( KiB/s): min= 816, max= 1952, per=4.26%, avg=1194.40, stdev=242.01, samples=20 00:24:44.197 iops : min= 204, max= 488, avg=298.60, stdev=60.50, samples=20 00:24:44.197 lat (msec) : 20=1.27%, 50=40.86%, 100=57.68%, 250=0.20% 00:24:44.197 cpu : usr=40.83%, sys=2.48%, ctx=1442, majf=0, minf=9 00:24:44.197 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.197 issued rwts: total=3003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=83206: Mon Dec 9 09:35:19 2024 00:24:44.197 read: IOPS=298, BW=1194KiB/s (1223kB/s)(11.7MiB/10033msec) 00:24:44.197 slat (usec): min=5, max=4038, avg=20.85, stdev=164.11 00:24:44.197 clat (msec): min=5, max=119, avg=53.45, stdev=18.13 00:24:44.197 lat (msec): min=5, max=119, avg=53.47, stdev=18.14 00:24:44.197 clat percentiles (msec): 00:24:44.197 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 40], 00:24:44.197 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 57], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 83], 95.00th=[ 87], 00:24:44.198 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 120], 00:24:44.198 | 99.99th=[ 121] 00:24:44.198 bw ( KiB/s): min= 760, max= 2368, per=4.25%, avg=1192.00, stdev=318.54, samples=20 00:24:44.198 iops : min= 190, max= 592, avg=298.00, stdev=79.64, samples=20 00:24:44.198 lat (msec) : 10=0.93%, 20=3.20%, 50=37.88%, 100=57.78%, 250=0.20% 00:24:44.198 cpu : usr=42.05%, sys=2.51%, ctx=1193, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.2%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83207: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=285, BW=1144KiB/s (1171kB/s)(11.2MiB/10031msec) 00:24:44.198 slat (usec): min=5, max=8047, avg=34.21, stdev=395.64 00:24:44.198 clat (msec): min=7, max=107, avg=55.77, stdev=16.80 00:24:44.198 lat (msec): min=8, max=107, avg=55.81, stdev=16.80 00:24:44.198 clat percentiles (msec): 00:24:44.198 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 46], 00:24:44.198 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 86], 00:24:44.198 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 100], 00:24:44.198 | 99.99th=[ 108] 00:24:44.198 bw ( KiB/s): min= 816, max= 1800, per=4.07%, avg=1140.80, stdev=208.67, samples=20 00:24:44.198 iops : min= 204, max= 450, avg=285.20, stdev=52.17, samples=20 00:24:44.198 lat (msec) : 10=0.07%, 20=1.05%, 50=39.23%, 100=59.62%, 250=0.03% 00:24:44.198 cpu : usr=31.27%, sys=1.93%, ctx=854, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=80.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83208: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=286, BW=1146KiB/s (1174kB/s)(11.2MiB/10022msec) 00:24:44.198 slat (usec): min=6, max=8030, avg=25.60, stdev=298.91 00:24:44.198 clat (msec): min=11, max=110, avg=55.70, stdev=16.82 00:24:44.198 lat (msec): min=11, max=110, avg=55.73, stdev=16.82 00:24:44.198 clat percentiles 
(msec): 00:24:44.198 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 45], 00:24:44.198 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 85], 00:24:44.198 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:24:44.198 | 99.99th=[ 111] 00:24:44.198 bw ( KiB/s): min= 800, max= 1784, per=4.08%, avg=1142.80, stdev=209.86, samples=20 00:24:44.198 iops : min= 200, max= 446, avg=285.70, stdev=52.46, samples=20 00:24:44.198 lat (msec) : 20=1.11%, 50=39.45%, 100=59.33%, 250=0.10% 00:24:44.198 cpu : usr=31.23%, sys=1.98%, ctx=864, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83209: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=293, BW=1176KiB/s (1204kB/s)(11.5MiB/10028msec) 00:24:44.198 slat (usec): min=5, max=4025, avg=18.74, stdev=133.28 00:24:44.198 clat (msec): min=6, max=119, avg=54.30, stdev=17.81 00:24:44.198 lat (msec): min=6, max=119, avg=54.32, stdev=17.81 00:24:44.198 clat percentiles (msec): 00:24:44.198 | 1.00th=[ 14], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 39], 00:24:44.198 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 57], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 84], 95.00th=[ 87], 00:24:44.198 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 115], 99.95th=[ 121], 00:24:44.198 | 99.99th=[ 121] 00:24:44.198 bw ( KiB/s): min= 736, max= 1944, per=4.19%, avg=1175.20, stdev=257.36, samples=20 00:24:44.198 iops : min= 184, max= 486, avg=293.80, stdev=64.34, samples=20 00:24:44.198 lat (msec) : 10=0.07%, 20=2.31%, 50=38.94%, 100=58.45%, 250=0.24% 00:24:44.198 cpu : usr=38.11%, sys=2.34%, ctx=1548, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83210: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=297, BW=1189KiB/s (1218kB/s)(11.6MiB/10004msec) 00:24:44.198 slat (usec): min=2, max=8030, avg=28.96, stdev=328.34 00:24:44.198 clat (msec): min=4, max=107, avg=53.68, stdev=16.99 00:24:44.198 lat (msec): min=4, max=107, avg=53.71, stdev=17.00 00:24:44.198 clat percentiles (msec): 00:24:44.198 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 37], 00:24:44.198 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 59], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 80], 95.00th=[ 87], 00:24:44.198 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 101], 00:24:44.198 | 99.99th=[ 108] 00:24:44.198 bw ( KiB/s): min= 848, max= 1392, per=4.14%, avg=1161.68, stdev=162.96, samples=19 00:24:44.198 iops : min= 212, max= 348, avg=290.42, stdev=40.74, samples=19 00:24:44.198 lat (msec) : 10=1.01%, 20=0.57%, 50=42.30%, 100=56.09%, 250=0.03% 00:24:44.198 cpu : usr=31.27%, sys=2.04%, ctx=881, majf=0, minf=9 00:24:44.198 IO 
depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83211: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=292, BW=1169KiB/s (1197kB/s)(11.4MiB/10006msec) 00:24:44.198 slat (usec): min=5, max=10036, avg=26.23, stdev=269.56 00:24:44.198 clat (msec): min=10, max=107, avg=54.64, stdev=16.14 00:24:44.198 lat (msec): min=10, max=107, avg=54.66, stdev=16.15 00:24:44.198 clat percentiles (msec): 00:24:44.198 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:24:44.198 | 30.00th=[ 46], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 58], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 81], 95.00th=[ 87], 00:24:44.198 | 99.00th=[ 94], 99.50th=[ 97], 99.90th=[ 100], 99.95th=[ 103], 00:24:44.198 | 99.99th=[ 108] 00:24:44.198 bw ( KiB/s): min= 816, max= 1542, per=4.12%, avg=1154.00, stdev=178.98, samples=19 00:24:44.198 iops : min= 204, max= 385, avg=288.47, stdev=44.68, samples=19 00:24:44.198 lat (msec) : 20=0.21%, 50=36.76%, 100=62.96%, 250=0.07% 00:24:44.198 cpu : usr=40.26%, sys=2.48%, ctx=1549, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83212: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=281, BW=1128KiB/s (1155kB/s)(11.0MiB/10025msec) 00:24:44.198 slat (usec): min=3, max=8039, avg=19.73, stdev=213.29 00:24:44.198 clat (msec): min=2, max=119, avg=56.63, stdev=18.64 00:24:44.198 lat (msec): min=2, max=120, avg=56.65, stdev=18.64 00:24:44.198 clat percentiles (msec): 00:24:44.198 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 47], 00:24:44.198 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 85], 95.00th=[ 87], 00:24:44.198 | 99.00th=[ 96], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:24:44.198 | 99.99th=[ 121] 00:24:44.198 bw ( KiB/s): min= 760, max= 2248, per=4.02%, avg=1126.40, stdev=296.11, samples=20 00:24:44.198 iops : min= 190, max= 562, avg=281.60, stdev=74.03, samples=20 00:24:44.198 lat (msec) : 4=0.07%, 10=2.16%, 20=1.63%, 50=34.08%, 100=61.54% 00:24:44.198 lat (msec) : 250=0.53% 00:24:44.198 cpu : usr=31.25%, sys=2.23%, ctx=887, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=88.9%, 8=10.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.198 filename1: (groupid=0, jobs=1): err= 0: pid=83213: Mon Dec 9 09:35:19 2024 00:24:44.198 read: IOPS=296, BW=1185KiB/s (1214kB/s)(11.6MiB/10006msec) 00:24:44.198 slat (usec): min=2, max=6798, avg=24.69, stdev=206.64 00:24:44.198 clat 
(msec): min=6, max=100, avg=53.88, stdev=16.24 00:24:44.198 lat (msec): min=6, max=100, avg=53.91, stdev=16.25 00:24:44.198 clat percentiles (msec): 00:24:44.198 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 00:24:44.198 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 58], 00:24:44.198 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 80], 95.00th=[ 86], 00:24:44.198 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 101], 00:24:44.198 | 99.99th=[ 101] 00:24:44.198 bw ( KiB/s): min= 864, max= 1346, per=4.15%, avg=1164.74, stdev=158.35, samples=19 00:24:44.198 iops : min= 216, max= 336, avg=291.16, stdev=39.55, samples=19 00:24:44.198 lat (msec) : 10=0.40%, 20=0.54%, 50=41.65%, 100=57.30%, 250=0.10% 00:24:44.198 cpu : usr=38.91%, sys=2.52%, ctx=1136, majf=0, minf=9 00:24:44.198 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.198 issued rwts: total=2965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename1: (groupid=0, jobs=1): err= 0: pid=83214: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=290, BW=1161KiB/s (1189kB/s)(11.3MiB/10002msec) 00:24:44.199 slat (usec): min=2, max=8026, avg=27.21, stdev=258.54 00:24:44.199 clat (usec): min=3142, max=99326, avg=54998.20, stdev=16573.14 00:24:44.199 lat (usec): min=3148, max=99345, avg=55025.41, stdev=16571.93 00:24:44.199 clat percentiles (usec): 00:24:44.199 | 1.00th=[ 8586], 5.00th=[31851], 10.00th=[35390], 20.00th=[39584], 00:24:44.199 | 30.00th=[46924], 40.00th=[50594], 50.00th=[55313], 60.00th=[58459], 00:24:44.199 | 70.00th=[60556], 80.00th=[65274], 90.00th=[81265], 95.00th=[86508], 00:24:44.199 | 99.00th=[92799], 99.50th=[94897], 99.90th=[99091], 99.95th=[99091], 00:24:44.199 | 99.99th=[99091] 00:24:44.199 bw ( KiB/s): min= 824, max= 1280, per=4.06%, avg=1137.68, stdev=158.38, samples=19 00:24:44.199 iops : min= 206, max= 320, avg=284.42, stdev=39.60, samples=19 00:24:44.199 lat (msec) : 4=0.31%, 10=0.90%, 20=0.31%, 50=37.00%, 100=61.49% 00:24:44.199 cpu : usr=39.10%, sys=2.36%, ctx=1241, majf=0, minf=9 00:24:44.199 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=2903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83215: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=296, BW=1186KiB/s (1215kB/s)(11.6MiB/10022msec) 00:24:44.199 slat (usec): min=5, max=4020, avg=24.99, stdev=189.09 00:24:44.199 clat (msec): min=16, max=105, avg=53.83, stdev=15.59 00:24:44.199 lat (msec): min=16, max=105, avg=53.86, stdev=15.59 00:24:44.199 clat percentiles (msec): 00:24:44.199 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:24:44.199 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 56], 00:24:44.199 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 79], 95.00th=[ 87], 00:24:44.199 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 99], 00:24:44.199 | 99.99th=[ 106] 00:24:44.199 bw ( KiB/s): min= 840, max= 1452, per=4.19%, avg=1173.26, stdev=167.71, samples=19 00:24:44.199 iops : min= 210, max= 
363, avg=293.32, stdev=41.93, samples=19 00:24:44.199 lat (msec) : 20=0.20%, 50=42.46%, 100=57.30%, 250=0.03% 00:24:44.199 cpu : usr=41.48%, sys=2.82%, ctx=1272, majf=0, minf=9 00:24:44.199 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=2972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83216: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=293, BW=1172KiB/s (1200kB/s)(11.5MiB/10030msec) 00:24:44.199 slat (usec): min=3, max=4043, avg=16.76, stdev=105.13 00:24:44.199 clat (msec): min=9, max=117, avg=54.48, stdev=17.65 00:24:44.199 lat (msec): min=9, max=117, avg=54.50, stdev=17.65 00:24:44.199 clat percentiles (msec): 00:24:44.199 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 40], 00:24:44.199 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 57], 00:24:44.199 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 83], 95.00th=[ 88], 00:24:44.199 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 114], 99.95th=[ 116], 00:24:44.199 | 99.99th=[ 118] 00:24:44.199 bw ( KiB/s): min= 760, max= 1968, per=4.18%, avg=1170.40, stdev=262.86, samples=20 00:24:44.199 iops : min= 190, max= 492, avg=292.60, stdev=65.71, samples=20 00:24:44.199 lat (msec) : 10=0.37%, 20=0.82%, 50=41.00%, 100=57.20%, 250=0.61% 00:24:44.199 cpu : usr=40.40%, sys=2.58%, ctx=1263, majf=0, minf=9 00:24:44.199 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=2939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83217: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=293, BW=1176KiB/s (1204kB/s)(11.5MiB/10005msec) 00:24:44.199 slat (usec): min=2, max=5042, avg=22.72, stdev=171.01 00:24:44.199 clat (msec): min=8, max=115, avg=54.32, stdev=17.03 00:24:44.199 lat (msec): min=8, max=115, avg=54.35, stdev=17.03 00:24:44.199 clat percentiles (msec): 00:24:44.199 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:24:44.199 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 57], 00:24:44.199 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 82], 95.00th=[ 88], 00:24:44.199 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 116], 99.95th=[ 116], 00:24:44.199 | 99.99th=[ 116] 00:24:44.199 bw ( KiB/s): min= 816, max= 1532, per=4.14%, avg=1159.79, stdev=181.37, samples=19 00:24:44.199 iops : min= 204, max= 383, avg=289.95, stdev=45.34, samples=19 00:24:44.199 lat (msec) : 10=0.34%, 20=0.58%, 50=41.04%, 100=57.60%, 250=0.44% 00:24:44.199 cpu : usr=41.76%, sys=2.54%, ctx=1283, majf=0, minf=9 00:24:44.199 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=2941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83218: Mon Dec 9 
09:35:19 2024 00:24:44.199 read: IOPS=298, BW=1194KiB/s (1223kB/s)(11.7MiB/10040msec) 00:24:44.199 slat (usec): min=5, max=9022, avg=24.90, stdev=264.38 00:24:44.199 clat (msec): min=2, max=108, avg=53.42, stdev=19.86 00:24:44.199 lat (msec): min=2, max=108, avg=53.45, stdev=19.86 00:24:44.199 clat percentiles (msec): 00:24:44.199 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 31], 20.00th=[ 38], 00:24:44.199 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 58], 00:24:44.199 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 84], 95.00th=[ 88], 00:24:44.199 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:24:44.199 | 99.99th=[ 109] 00:24:44.199 bw ( KiB/s): min= 784, max= 2896, per=4.26%, avg=1194.70, stdev=432.87, samples=20 00:24:44.199 iops : min= 196, max= 724, avg=298.65, stdev=108.22, samples=20 00:24:44.199 lat (msec) : 4=1.13%, 10=2.60%, 20=2.47%, 50=34.86%, 100=58.74% 00:24:44.199 lat (msec) : 250=0.20% 00:24:44.199 cpu : usr=35.84%, sys=2.33%, ctx=1171, majf=0, minf=9 00:24:44.199 IO depths : 1=0.2%, 2=0.5%, 4=1.2%, 8=81.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=2998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83219: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=310, BW=1240KiB/s (1270kB/s)(12.1MiB/10001msec) 00:24:44.199 slat (usec): min=5, max=8037, avg=28.88, stdev=287.69 00:24:44.199 clat (usec): min=1014, max=102250, avg=51485.57, stdev=19103.02 00:24:44.199 lat (usec): min=1020, max=102279, avg=51514.45, stdev=19111.82 00:24:44.199 clat percentiles (usec): 00:24:44.199 | 1.00th=[ 1172], 5.00th=[ 11207], 10.00th=[ 31851], 20.00th=[ 39060], 00:24:44.199 | 30.00th=[ 41681], 40.00th=[ 47973], 50.00th=[ 54264], 60.00th=[ 55837], 00:24:44.199 | 70.00th=[ 58983], 80.00th=[ 63701], 90.00th=[ 77071], 95.00th=[ 85459], 00:24:44.199 | 99.00th=[ 94897], 99.50th=[ 95945], 99.90th=[100140], 99.95th=[102237], 00:24:44.199 | 99.99th=[102237] 00:24:44.199 bw ( KiB/s): min= 864, max= 1432, per=4.17%, avg=1168.42, stdev=171.22, samples=19 00:24:44.199 iops : min= 216, max= 358, avg=292.11, stdev=42.80, samples=19 00:24:44.199 lat (msec) : 2=2.97%, 4=1.13%, 10=0.84%, 20=0.26%, 50=40.31% 00:24:44.199 lat (msec) : 100=54.27%, 250=0.23% 00:24:44.199 cpu : usr=39.96%, sys=2.79%, ctx=1403, majf=0, minf=9 00:24:44.199 IO depths : 1=0.2%, 2=0.9%, 4=2.9%, 8=80.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=3101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83220: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=289, BW=1159KiB/s (1187kB/s)(11.3MiB/10009msec) 00:24:44.199 slat (usec): min=2, max=8031, avg=26.46, stdev=288.80 00:24:44.199 clat (msec): min=15, max=119, avg=55.10, stdev=16.21 00:24:44.199 lat (msec): min=15, max=119, avg=55.13, stdev=16.22 00:24:44.199 clat percentiles (msec): 00:24:44.199 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 40], 00:24:44.199 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 60], 00:24:44.199 | 70.00th=[ 61], 80.00th=[ 64], 
90.00th=[ 83], 95.00th=[ 85], 00:24:44.199 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:24:44.199 | 99.99th=[ 121] 00:24:44.199 bw ( KiB/s): min= 864, max= 1520, per=4.09%, avg=1146.95, stdev=174.81, samples=19 00:24:44.199 iops : min= 216, max= 380, avg=286.74, stdev=43.70, samples=19 00:24:44.199 lat (msec) : 20=0.17%, 50=42.34%, 100=57.31%, 250=0.17% 00:24:44.199 cpu : usr=31.44%, sys=1.82%, ctx=882, majf=0, minf=9 00:24:44.199 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:44.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.199 issued rwts: total=2900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.199 filename2: (groupid=0, jobs=1): err= 0: pid=83221: Mon Dec 9 09:35:19 2024 00:24:44.199 read: IOPS=293, BW=1175KiB/s (1203kB/s)(11.5MiB/10008msec) 00:24:44.199 slat (usec): min=2, max=9035, avg=29.61, stdev=339.03 00:24:44.199 clat (msec): min=8, max=108, avg=54.36, stdev=16.48 00:24:44.199 lat (msec): min=8, max=108, avg=54.39, stdev=16.49 00:24:44.199 clat percentiles (msec): 00:24:44.199 | 1.00th=[ 22], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 39], 00:24:44.199 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 60], 00:24:44.199 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 83], 95.00th=[ 85], 00:24:44.199 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 109], 99.95th=[ 109], 00:24:44.199 | 99.99th=[ 109] 00:24:44.200 bw ( KiB/s): min= 864, max= 1507, per=4.13%, avg=1158.89, stdev=172.48, samples=19 00:24:44.200 iops : min= 216, max= 376, avg=289.68, stdev=43.04, samples=19 00:24:44.200 lat (msec) : 10=0.31%, 20=0.65%, 50=43.59%, 100=55.26%, 250=0.20% 00:24:44.200 cpu : usr=31.43%, sys=1.89%, ctx=854, majf=0, minf=9 00:24:44.200 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.200 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.200 issued rwts: total=2939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.200 filename2: (groupid=0, jobs=1): err= 0: pid=83223: Mon Dec 9 09:35:19 2024 00:24:44.200 read: IOPS=284, BW=1139KiB/s (1166kB/s)(11.1MiB/10027msec) 00:24:44.200 slat (usec): min=4, max=8019, avg=26.86, stdev=283.13 00:24:44.200 clat (msec): min=4, max=120, avg=56.05, stdev=18.01 00:24:44.200 lat (msec): min=4, max=125, avg=56.08, stdev=18.02 00:24:44.200 clat percentiles (msec): 00:24:44.200 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 46], 00:24:44.200 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 60], 00:24:44.200 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 83], 95.00th=[ 88], 00:24:44.200 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 118], 99.95th=[ 120], 00:24:44.200 | 99.99th=[ 122] 00:24:44.200 bw ( KiB/s): min= 752, max= 2232, per=4.06%, avg=1137.60, stdev=293.62, samples=20 00:24:44.200 iops : min= 188, max= 558, avg=284.40, stdev=73.41, samples=20 00:24:44.200 lat (msec) : 10=1.68%, 20=1.89%, 50=30.73%, 100=65.38%, 250=0.32% 00:24:44.200 cpu : usr=35.21%, sys=2.47%, ctx=1038, majf=0, minf=9 00:24:44.200 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.1%, 16=17.0%, 32=0.0%, >=64=0.0% 00:24:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.200 complete : 0=0.0%, 4=88.6%, 
8=10.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.200 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:44.200 00:24:44.200 Run status group 0 (all jobs): 00:24:44.200 READ: bw=27.4MiB/s (28.7MB/s), 1072KiB/s-1240KiB/s (1097kB/s-1270kB/s), io=275MiB (288MB), run=10001-10045msec 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 bdev_null0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 [2024-12-09 09:35:20.137994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 bdev_null1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:44.200 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:44.201 { 00:24:44.201 "params": { 00:24:44.201 "name": "Nvme$subsystem", 00:24:44.201 "trtype": "$TEST_TRANSPORT", 00:24:44.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.201 "adrfam": "ipv4", 00:24:44.201 "trsvcid": "$NVMF_PORT", 00:24:44.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.201 "hdgst": ${hdgst:-false}, 00:24:44.201 "ddgst": ${ddgst:-false} 00:24:44.201 }, 00:24:44.201 "method": "bdev_nvme_attach_controller" 00:24:44.201 } 00:24:44.201 EOF 00:24:44.201 )") 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:44.201 { 00:24:44.201 "params": { 00:24:44.201 "name": "Nvme$subsystem", 00:24:44.201 "trtype": "$TEST_TRANSPORT", 00:24:44.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:44.201 "adrfam": "ipv4", 00:24:44.201 "trsvcid": "$NVMF_PORT", 00:24:44.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:44.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:44.201 "hdgst": ${hdgst:-false}, 00:24:44.201 "ddgst": ${ddgst:-false} 00:24:44.201 }, 00:24:44.201 "method": "bdev_nvme_attach_controller" 00:24:44.201 } 00:24:44.201 EOF 00:24:44.201 )") 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
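The create_subsystems trace a little further up (target/dif.sh@117) rebuilds the two targets used by this run: for each sub_id it creates a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, wraps it in an NVMe-oF subsystem, adds the bdev as a namespace, and opens a TCP listener on 10.0.0.3:4420. Condensed into plain rpc.py calls it is roughly the sketch below; the only assumption is that rpc_cmd forwards its arguments unchanged to scripts/rpc.py on the default RPC socket.

    # Sketch of the per-subsystem setup traced above (sub_id 0 shown; sub_id 1 is
    # identical apart from the 0/1 suffix in the names and serial number).
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420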
00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:44.201 "params": { 00:24:44.201 "name": "Nvme0", 00:24:44.201 "trtype": "tcp", 00:24:44.201 "traddr": "10.0.0.3", 00:24:44.201 "adrfam": "ipv4", 00:24:44.201 "trsvcid": "4420", 00:24:44.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:44.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:44.201 "hdgst": false, 00:24:44.201 "ddgst": false 00:24:44.201 }, 00:24:44.201 "method": "bdev_nvme_attach_controller" 00:24:44.201 },{ 00:24:44.201 "params": { 00:24:44.201 "name": "Nvme1", 00:24:44.201 "trtype": "tcp", 00:24:44.201 "traddr": "10.0.0.3", 00:24:44.201 "adrfam": "ipv4", 00:24:44.201 "trsvcid": "4420", 00:24:44.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:44.201 "hdgst": false, 00:24:44.201 "ddgst": false 00:24:44.201 }, 00:24:44.201 "method": "bdev_nvme_attach_controller" 00:24:44.201 }' 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:44.201 09:35:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:44.201 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:44.201 ... 00:24:44.201 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:44.201 ... 
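The JSON printed just above is handed to fio's spdk_bdev engine on /dev/fd/62 while the generated job file arrives on /dev/fd/61, with the fio plugin loaded via LD_PRELOAD. A standalone approximation of that invocation is sketched below; the bs/iodepth/numjobs/runtime values are the ones set at target/dif.sh@115 and echoed in the filename0/filename1 job lines above, while the job-file layout, the Nvme0n1/Nvme1n1 filenames and writing the bdev config to a regular file instead of /dev/fd are assumptions.

    # Sketch only: approximate standalone equivalent of the traced fio run.
    cat > dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k    ; 8k reads, 16k writes, 128k trims, as in the job lines above
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF
    # bdev.json holds the bdev_nvme_attach_controller config printed above.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_rand.fio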
00:24:44.201 fio-3.35 00:24:44.201 Starting 4 threads 00:24:48.498 00:24:48.498 filename0: (groupid=0, jobs=1): err= 0: pid=83356: Mon Dec 9 09:35:26 2024 00:24:48.498 read: IOPS=2772, BW=21.7MiB/s (22.7MB/s)(108MiB/5001msec) 00:24:48.498 slat (nsec): min=5880, max=55232, avg=11081.36, stdev=3428.77 00:24:48.498 clat (usec): min=740, max=5462, avg=2854.08, stdev=743.51 00:24:48.498 lat (usec): min=747, max=5475, avg=2865.16, stdev=744.14 00:24:48.498 clat percentiles (usec): 00:24:48.498 | 1.00th=[ 1516], 5.00th=[ 1696], 10.00th=[ 1713], 20.00th=[ 1926], 00:24:48.498 | 30.00th=[ 2311], 40.00th=[ 3032], 50.00th=[ 3228], 60.00th=[ 3359], 00:24:48.498 | 70.00th=[ 3425], 80.00th=[ 3490], 90.00th=[ 3556], 95.00th=[ 3654], 00:24:48.498 | 99.00th=[ 3785], 99.50th=[ 3818], 99.90th=[ 4047], 99.95th=[ 4080], 00:24:48.498 | 99.99th=[ 5211] 00:24:48.498 bw ( KiB/s): min=18048, max=24816, per=25.61%, avg=21900.89, stdev=2783.67, samples=9 00:24:48.498 iops : min= 2256, max= 3102, avg=2737.56, stdev=348.04, samples=9 00:24:48.498 lat (usec) : 750=0.01%, 1000=0.12% 00:24:48.498 lat (msec) : 2=24.93%, 4=74.75%, 10=0.19% 00:24:48.498 cpu : usr=90.46%, sys=8.86%, ctx=6, majf=0, minf=0 00:24:48.498 IO depths : 1=0.1%, 2=8.1%, 4=59.4%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:48.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.498 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.498 issued rwts: total=13865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.498 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:48.498 filename0: (groupid=0, jobs=1): err= 0: pid=83357: Mon Dec 9 09:35:26 2024 00:24:48.498 read: IOPS=2606, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:24:48.498 slat (nsec): min=5913, max=60731, avg=11379.81, stdev=3620.71 00:24:48.498 clat (usec): min=587, max=5468, avg=3032.82, stdev=806.38 00:24:48.498 lat (usec): min=595, max=5481, avg=3044.20, stdev=806.65 00:24:48.498 clat percentiles (usec): 00:24:48.498 | 1.00th=[ 1045], 5.00th=[ 1090], 10.00th=[ 1696], 20.00th=[ 2311], 00:24:48.498 | 30.00th=[ 3032], 40.00th=[ 3228], 50.00th=[ 3425], 60.00th=[ 3490], 00:24:48.498 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3720], 95.00th=[ 3818], 00:24:48.499 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4752], 99.95th=[ 4948], 00:24:48.499 | 99.99th=[ 5407] 00:24:48.499 bw ( KiB/s): min=18032, max=26704, per=24.76%, avg=21171.56, stdev=3360.75, samples=9 00:24:48.499 iops : min= 2254, max= 3338, avg=2646.44, stdev=420.09, samples=9 00:24:48.499 lat (usec) : 750=0.05%, 1000=0.25% 00:24:48.499 lat (msec) : 2=16.29%, 4=81.43%, 10=1.97% 00:24:48.499 cpu : usr=89.48%, sys=9.82%, ctx=7, majf=0, minf=1 00:24:48.499 IO depths : 1=0.1%, 2=12.6%, 4=56.8%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:48.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.499 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.499 issued rwts: total=13037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.499 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:48.499 filename1: (groupid=0, jobs=1): err= 0: pid=83358: Mon Dec 9 09:35:26 2024 00:24:48.499 read: IOPS=2582, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:24:48.499 slat (nsec): min=5976, max=66128, avg=12708.33, stdev=2848.56 00:24:48.499 clat (usec): min=896, max=5359, avg=3057.97, stdev=674.33 00:24:48.499 lat (usec): min=903, max=5371, avg=3070.68, stdev=674.75 00:24:48.499 clat percentiles (usec): 00:24:48.499 | 1.00th=[ 
1352], 5.00th=[ 1663], 10.00th=[ 1745], 20.00th=[ 2311], 00:24:48.499 | 30.00th=[ 3163], 40.00th=[ 3228], 50.00th=[ 3392], 60.00th=[ 3458], 00:24:48.499 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3654], 00:24:48.499 | 99.00th=[ 3785], 99.50th=[ 3818], 99.90th=[ 4113], 99.95th=[ 4178], 00:24:48.499 | 99.99th=[ 5276] 00:24:48.499 bw ( KiB/s): min=18048, max=24816, per=24.52%, avg=20974.11, stdev=2728.80, samples=9 00:24:48.499 iops : min= 2256, max= 3102, avg=2621.67, stdev=340.97, samples=9 00:24:48.499 lat (usec) : 1000=0.15% 00:24:48.499 lat (msec) : 2=13.98%, 4=85.51%, 10=0.36% 00:24:48.499 cpu : usr=90.44%, sys=8.94%, ctx=5, majf=0, minf=0 00:24:48.499 IO depths : 1=0.1%, 2=13.6%, 4=56.3%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:48.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.499 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.499 issued rwts: total=12914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.499 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:48.499 filename1: (groupid=0, jobs=1): err= 0: pid=83359: Mon Dec 9 09:35:26 2024 00:24:48.499 read: IOPS=2730, BW=21.3MiB/s (22.4MB/s)(107MiB/5002msec) 00:24:48.499 slat (nsec): min=5991, max=65442, avg=11261.68, stdev=3583.03 00:24:48.499 clat (usec): min=582, max=5245, avg=2895.72, stdev=751.07 00:24:48.499 lat (usec): min=589, max=5257, avg=2906.99, stdev=750.72 00:24:48.499 clat percentiles (usec): 00:24:48.499 | 1.00th=[ 1565], 5.00th=[ 1680], 10.00th=[ 1696], 20.00th=[ 1942], 00:24:48.499 | 30.00th=[ 2311], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3392], 00:24:48.499 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3556], 95.00th=[ 3720], 00:24:48.499 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4752], 99.95th=[ 4752], 00:24:48.499 | 99.99th=[ 5014] 00:24:48.499 bw ( KiB/s): min=18048, max=24816, per=25.19%, avg=21543.11, stdev=2733.80, samples=9 00:24:48.499 iops : min= 2256, max= 3102, avg=2692.89, stdev=341.73, samples=9 00:24:48.499 lat (usec) : 750=0.04%, 1000=0.08% 00:24:48.499 lat (msec) : 2=24.26%, 4=75.05%, 10=0.56% 00:24:48.499 cpu : usr=89.86%, sys=9.42%, ctx=12, majf=0, minf=0 00:24:48.499 IO depths : 1=0.1%, 2=9.2%, 4=58.7%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:48.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.499 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.499 issued rwts: total=13656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.499 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:48.499 00:24:48.499 Run status group 0 (all jobs): 00:24:48.499 READ: bw=83.5MiB/s (87.6MB/s), 20.2MiB/s-21.7MiB/s (21.2MB/s-22.7MB/s), io=418MiB (438MB), run=5001-5002msec 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:48.758 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 00:24:48.759 real 0m23.662s 00:24:48.759 user 2m2.313s 00:24:48.759 sys 0m10.249s 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 ************************************ 00:24:48.759 END TEST fio_dif_rand_params 00:24:48.759 ************************************ 00:24:48.759 09:35:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:48.759 09:35:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:48.759 09:35:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 ************************************ 00:24:48.759 START TEST fio_dif_digest 00:24:48.759 ************************************ 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 bdev_null0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 [2024-12-09 09:35:26.384695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:48.759 { 00:24:48.759 "params": { 00:24:48.759 "name": "Nvme$subsystem", 00:24:48.759 "trtype": "$TEST_TRANSPORT", 00:24:48.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:48.759 "adrfam": "ipv4", 00:24:48.759 "trsvcid": "$NVMF_PORT", 00:24:48.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:48.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:48.759 
"hdgst": ${hdgst:-false}, 00:24:48.759 "ddgst": ${ddgst:-false} 00:24:48.759 }, 00:24:48.759 "method": "bdev_nvme_attach_controller" 00:24:48.759 } 00:24:48.759 EOF 00:24:48.759 )") 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:48.759 "params": { 00:24:48.759 "name": "Nvme0", 00:24:48.759 "trtype": "tcp", 00:24:48.759 "traddr": "10.0.0.3", 00:24:48.759 "adrfam": "ipv4", 00:24:48.759 "trsvcid": "4420", 00:24:48.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:48.759 "hdgst": true, 00:24:48.759 "ddgst": true 00:24:48.759 }, 00:24:48.759 "method": "bdev_nvme_attach_controller" 00:24:48.759 }' 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:48.759 09:35:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.018 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:49.018 ... 
00:24:49.018 fio-3.35 00:24:49.018 Starting 3 threads 00:25:01.221 00:25:01.221 filename0: (groupid=0, jobs=1): err= 0: pid=83465: Mon Dec 9 09:35:37 2024 00:25:01.221 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(356MiB/10007msec) 00:25:01.221 slat (nsec): min=4027, max=69514, avg=12797.60, stdev=8237.98 00:25:01.221 clat (usec): min=8338, max=12414, avg=10522.59, stdev=150.85 00:25:01.221 lat (usec): min=8342, max=12441, avg=10535.39, stdev=152.11 00:25:01.221 clat percentiles (usec): 00:25:01.221 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:25:01.221 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 00:25:01.221 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10683], 95.00th=[10683], 00:25:01.221 | 99.00th=[11076], 99.50th=[11207], 99.90th=[12387], 99.95th=[12387], 00:25:01.221 | 99.99th=[12387] 00:25:01.221 bw ( KiB/s): min=36096, max=36864, per=33.33%, avg=36378.95, stdev=380.62, samples=19 00:25:01.221 iops : min= 282, max= 288, avg=284.21, stdev= 2.97, samples=19 00:25:01.221 lat (msec) : 10=0.11%, 20=99.89% 00:25:01.221 cpu : usr=95.23%, sys=4.31%, ctx=175, majf=0, minf=0 00:25:01.221 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.221 issued rwts: total=2844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.221 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:01.221 filename0: (groupid=0, jobs=1): err= 0: pid=83466: Mon Dec 9 09:35:37 2024 00:25:01.221 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(356MiB/10007msec) 00:25:01.221 slat (nsec): min=6049, max=76875, avg=12623.68, stdev=7989.79 00:25:01.221 clat (usec): min=7554, max=12591, avg=10522.89, stdev=198.47 00:25:01.221 lat (usec): min=7561, max=12637, avg=10535.51, stdev=199.43 00:25:01.221 clat percentiles (usec): 00:25:01.221 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:25:01.221 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 00:25:01.221 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10552], 95.00th=[10683], 00:25:01.221 | 99.00th=[11076], 99.50th=[11207], 99.90th=[12518], 99.95th=[12518], 00:25:01.221 | 99.99th=[12649] 00:25:01.221 bw ( KiB/s): min=36096, max=36864, per=33.33%, avg=36378.95, stdev=380.62, samples=19 00:25:01.221 iops : min= 282, max= 288, avg=284.21, stdev= 2.97, samples=19 00:25:01.221 lat (msec) : 10=0.21%, 20=99.79% 00:25:01.221 cpu : usr=89.13%, sys=10.35%, ctx=13, majf=0, minf=0 00:25:01.221 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.221 issued rwts: total=2844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.221 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:01.221 filename0: (groupid=0, jobs=1): err= 0: pid=83467: Mon Dec 9 09:35:37 2024 00:25:01.221 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(356MiB/10007msec) 00:25:01.221 slat (nsec): min=6036, max=36274, avg=10021.64, stdev=3942.47 00:25:01.221 clat (usec): min=7612, max=13422, avg=10530.95, stdev=197.36 00:25:01.221 lat (usec): min=7646, max=13449, avg=10540.97, stdev=197.56 00:25:01.221 clat percentiles (usec): 00:25:01.221 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:25:01.221 | 30.00th=[10552], 
40.00th=[10552], 50.00th=[10552], 60.00th=[10552], 00:25:01.221 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10552], 95.00th=[10814], 00:25:01.221 | 99.00th=[11076], 99.50th=[11338], 99.90th=[13435], 99.95th=[13435], 00:25:01.221 | 99.99th=[13435] 00:25:01.221 bw ( KiB/s): min=36096, max=36864, per=33.33%, avg=36378.95, stdev=380.62, samples=19 00:25:01.221 iops : min= 282, max= 288, avg=284.21, stdev= 2.97, samples=19 00:25:01.221 lat (msec) : 10=0.32%, 20=99.68% 00:25:01.221 cpu : usr=94.85%, sys=4.67%, ctx=428, majf=0, minf=0 00:25:01.221 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:01.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.221 issued rwts: total=2844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.221 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:01.221 00:25:01.221 Run status group 0 (all jobs): 00:25:01.221 READ: bw=107MiB/s (112MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=1067MiB (1118MB), run=10007-10007msec 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.221 00:25:01.221 real 0m11.026s 00:25:01.221 user 0m28.598s 00:25:01.221 sys 0m2.229s 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.221 ************************************ 00:25:01.221 END TEST fio_dif_digest 00:25:01.221 09:35:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:01.221 ************************************ 00:25:01.221 09:35:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:01.221 09:35:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.221 rmmod nvme_tcp 00:25:01.221 rmmod nvme_fabrics 00:25:01.221 rmmod nvme_keyring 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82706 ']' 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82706 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82706 ']' 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82706 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82706 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.221 killing process with pid 82706 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82706' 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82706 00:25:01.221 09:35:37 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82706 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:01.221 09:35:37 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:01.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:01.221 Waiting for block devices as requested 00:25:01.221 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:01.221 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
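The ip link commands above are nvmf_veth_fini unwinding the virtual topology the test created: the bridge-facing veth ends are detached and brought down, the nvmf_br bridge and the initiator-side veth pairs are deleted, and the target-side ends are removed from inside the nvmf_tgt_ns_spdk namespace. Collected into one sketch (interface names exactly as traced; the error suppression and per-port grouping are assumptions):

  # detach and down the bridge-facing veth ends
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster 2>/dev/null
      ip link set "$port" down 2>/dev/null
  done
  # remove the bridge and the initiator-side veth interfaces
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  # remove the target-side ends from inside the namespace
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2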
00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:01.221 09:35:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.222 09:35:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:01.222 09:35:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.479 09:35:38 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:25:01.479 00:25:01.479 real 1m1.048s 00:25:01.479 user 3m47.266s 00:25:01.479 sys 0m22.686s 00:25:01.479 09:35:38 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.479 ************************************ 00:25:01.479 END TEST nvmf_dif 00:25:01.479 ************************************ 00:25:01.479 09:35:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:01.479 09:35:39 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:01.479 09:35:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:01.479 09:35:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.479 09:35:39 -- common/autotest_common.sh@10 -- # set +x 00:25:01.479 ************************************ 00:25:01.479 START TEST nvmf_abort_qd_sizes 00:25:01.479 ************************************ 00:25:01.479 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:01.479 * Looking for test storage... 00:25:01.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:01.479 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:01.479 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:25:01.479 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.737 --rc genhtml_branch_coverage=1 00:25:01.737 --rc genhtml_function_coverage=1 00:25:01.737 --rc genhtml_legend=1 00:25:01.737 --rc geninfo_all_blocks=1 00:25:01.737 --rc geninfo_unexecuted_blocks=1 00:25:01.737 00:25:01.737 ' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.737 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:01.737 09:35:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:01.738 Cannot find device "nvmf_init_br" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:01.738 Cannot find device "nvmf_init_br2" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:01.738 Cannot find device "nvmf_tgt_br" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:01.738 Cannot find device "nvmf_tgt_br2" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:01.738 Cannot find device "nvmf_init_br" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:01.738 Cannot find device "nvmf_init_br2" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:01.738 Cannot find device "nvmf_tgt_br" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:01.738 Cannot find device "nvmf_tgt_br2" 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:25:01.738 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:01.996 Cannot find device "nvmf_br" 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:01.996 Cannot find device "nvmf_init_if" 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:01.996 Cannot find device "nvmf_init_if2" 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:01.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
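The "Cannot find device" and "Cannot open network namespace" messages above are expected: before building its topology, nvmf_veth_init replays the same delete commands as the teardown, and each delete is immediately followed by a traced "true" so a missing interface or namespace does not trip set -e. The construct is most likely equivalent to:

  # pre-create cleanup: a missing device is fine, so mask the failure
  ip link delete nvmf_init_if || true   # assumed "|| true"; the trace only shows the paired "true" command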
00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:01.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:01.996 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:01.997 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:01.997 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:01.997 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:01.997 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:01.997 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:01.997 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:02.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:02.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.141 ms 00:25:02.255 00:25:02.255 --- 10.0.0.3 ping statistics --- 00:25:02.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.255 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:02.255 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:02.255 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:25:02.255 00:25:02.255 --- 10.0.0.4 ping statistics --- 00:25:02.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.255 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:02.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:25:02.255 00:25:02.255 --- 10.0.0.1 ping statistics --- 00:25:02.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.255 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:02.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:02.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:25:02.255 00:25:02.255 --- 10.0.0.2 ping statistics --- 00:25:02.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.255 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:25:02.255 09:35:39 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:03.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:03.190 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:03.190 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.190 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84130 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84130 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84130 ']' 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.191 09:35:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:03.455 [2024-12-09 09:35:40.950714] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:03.455 [2024-12-09 09:35:40.950784] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.455 [2024-12-09 09:35:41.105335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.455 [2024-12-09 09:35:41.154634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.455 [2024-12-09 09:35:41.154682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.455 [2024-12-09 09:35:41.154692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.455 [2024-12-09 09:35:41.154700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.455 [2024-12-09 09:35:41.154707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.455 [2024-12-09 09:35:41.156581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.455 [2024-12-09 09:35:41.156688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.455 [2024-12-09 09:35:41.156882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.455 [2024-12-09 09:35:41.156885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.717 [2024-12-09 09:35:41.199610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:04.285 09:35:41 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
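The block above is nvme_in_userspace from scripts/common.sh enumerating NVMe controllers: it assembles PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), filters lspci output on that code, and keeps the functions that pci_can_use allows — here 0000:00:10.0 and 0000:00:11.0, with 0000:00:10.0 selected for the spdk_target_abort run. The filter itself, with the traced commands arranged as a conventional pipeline (the exact plumbing between the stages is inferred, the commands are verbatim):

  # list PCI functions whose class code is 0108 prog-if 02 (NVM Express controllers)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'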
00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.285 09:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:04.285 ************************************ 00:25:04.285 START TEST spdk_target_abort 00:25:04.285 ************************************ 00:25:04.285 09:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:25:04.285 09:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:04.285 09:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:04.286 09:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.286 09:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.286 spdk_targetn1 00:25:04.286 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.286 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.286 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.286 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.544 [2024-12-09 09:35:42.010323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:04.544 [2024-12-09 09:35:42.052971] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:04.544 09:35:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:04.544 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:04.545 09:35:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:07.863 Initializing NVMe Controllers 00:25:07.863 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:07.863 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:07.863 Initialization complete. Launching workers. 
00:25:07.863 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11598, failed: 0 00:25:07.863 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1050, failed to submit 10548 00:25:07.863 success 814, unsuccessful 236, failed 0 00:25:07.863 09:35:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:07.863 09:35:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:11.143 Initializing NVMe Controllers 00:25:11.143 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:11.143 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:11.143 Initialization complete. Launching workers. 00:25:11.143 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8960, failed: 0 00:25:11.143 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 7756 00:25:11.143 success 344, unsuccessful 860, failed 0 00:25:11.143 09:35:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:11.143 09:35:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:14.429 Initializing NVMe Controllers 00:25:14.429 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:14.429 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:14.429 Initialization complete. Launching workers. 
00:25:14.429 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31569, failed: 0 00:25:14.429 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2345, failed to submit 29224 00:25:14.429 success 494, unsuccessful 1851, failed 0 00:25:14.429 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:14.429 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.429 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.429 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.429 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:14.429 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.430 09:35:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84130 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84130 ']' 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84130 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84130 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:15.010 killing process with pid 84130 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84130' 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84130 00:25:15.010 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84130 00:25:15.269 00:25:15.269 real 0m10.841s 00:25:15.269 user 0m43.188s 00:25:15.269 sys 0m2.388s 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:15.269 ************************************ 00:25:15.269 END TEST spdk_target_abort 00:25:15.269 ************************************ 00:25:15.269 09:35:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:15.269 09:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:15.269 09:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.269 09:35:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:15.269 ************************************ 00:25:15.269 START TEST kernel_target_abort 00:25:15.269 
************************************ 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:15.269 09:35:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:15.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:15.838 Waiting for block devices as requested 00:25:15.838 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:16.097 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:16.097 No valid GPT data, bailing 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:16.097 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:16.098 No valid GPT data, bailing 00:25:16.098 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:16.357 No valid GPT data, bailing 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:16.357 No valid GPT data, bailing 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:25:16.357 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:16.358 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:25:16.358 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:25:16.358 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:25:16.358 09:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 --hostid=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 -a 10.0.0.1 -t tcp -s 4420 00:25:16.358 00:25:16.358 Discovery Log Number of Records 2, Generation counter 2 00:25:16.358 =====Discovery Log Entry 0====== 00:25:16.358 trtype: tcp 00:25:16.358 adrfam: ipv4 00:25:16.358 subtype: current discovery subsystem 00:25:16.358 treq: not specified, sq flow control disable supported 00:25:16.358 portid: 1 00:25:16.358 trsvcid: 4420 00:25:16.358 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:16.358 traddr: 10.0.0.1 00:25:16.358 eflags: none 00:25:16.358 sectype: none 00:25:16.358 =====Discovery Log Entry 1====== 00:25:16.358 trtype: tcp 00:25:16.358 adrfam: ipv4 00:25:16.358 subtype: nvme subsystem 00:25:16.358 treq: not specified, sq flow control disable supported 00:25:16.358 portid: 1 00:25:16.358 trsvcid: 4420 00:25:16.358 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:16.358 traddr: 10.0.0.1 00:25:16.358 eflags: none 00:25:16.358 sectype: none 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:16.358 09:35:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:16.358 09:35:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:19.641 Initializing NVMe Controllers 00:25:19.641 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:19.641 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:19.641 Initialization complete. Launching workers. 00:25:19.641 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35842, failed: 0 00:25:19.641 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35842, failed to submit 0 00:25:19.641 success 0, unsuccessful 35842, failed 0 00:25:19.641 09:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:19.641 09:35:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:22.927 Initializing NVMe Controllers 00:25:22.927 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:22.927 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:22.927 Initialization complete. Launching workers. 
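The abort runs above and below exercise a kernel nvmet target that the harness assembled through configfs a few lines earlier. Condensed into plain shell, that setup looks roughly like the sketch below; the echoed values are taken from the trace, but the configfs attribute paths are the standard Linux nvmet ones and are assumed here, since xtrace does not show redirections.

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"       # assumed attribute name
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # last unused block device found by the scan above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list both the discovery subsystem and testnqn
# Sweep the queue depths used by rabort.
for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done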
00:25:22.927 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75524, failed: 0 00:25:22.927 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38043, failed to submit 37481 00:25:22.927 success 0, unsuccessful 38043, failed 0 00:25:22.927 09:36:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:22.927 09:36:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:26.213 Initializing NVMe Controllers 00:25:26.213 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:26.213 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:26.213 Initialization complete. Launching workers. 00:25:26.213 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102579, failed: 0 00:25:26.213 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25662, failed to submit 76917 00:25:26.213 success 0, unsuccessful 25662, failed 0 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:26.213 09:36:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:27.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:29.677 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.677 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.677 00:25:29.677 real 0m14.468s 00:25:29.677 user 0m6.384s 00:25:29.677 sys 0m5.490s 00:25:29.677 09:36:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.677 09:36:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:29.677 ************************************ 00:25:29.677 END TEST kernel_target_abort 00:25:29.677 ************************************ 00:25:29.677 09:36:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:29.677 09:36:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:29.677 
09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.677 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.935 rmmod nvme_tcp 00:25:29.935 rmmod nvme_fabrics 00:25:29.935 rmmod nvme_keyring 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84130 ']' 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84130 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84130 ']' 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84130 00:25:29.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84130) - No such process 00:25:29.935 Process with pid 84130 is not found 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84130 is not found' 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:29.935 09:36:07 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:30.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:30.500 Waiting for block devices as requested 00:25:30.500 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:30.758 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:30.758 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:31.017 09:36:08 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:31.017 ************************************ 00:25:31.017 END TEST nvmf_abort_qd_sizes 00:25:31.017 ************************************ 00:25:31.017 00:25:31.017 real 0m29.654s 00:25:31.017 user 0m50.954s 00:25:31.017 sys 0m9.909s 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.017 09:36:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 09:36:08 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:31.277 09:36:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:31.277 09:36:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.277 09:36:08 -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 ************************************ 00:25:31.277 START TEST keyring_file 00:25:31.277 ************************************ 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:31.277 * Looking for test storage... 
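Once the abort sweep finishes, the trace above unwinds the whole fixture. Using only commands that appear in the trace (remove_spdk_ns is presumably what drops the target namespace itself), the teardown amounts to roughly:

# Drop the kernel target: unlink the port, remove namespace/port/subsystem, unload nvmet.
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet
# Unload the initiator-side modules and strip the SPDK_NVMF firewall rules.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Dismantle the veth/bridge test topology.
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2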
00:25:31.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.277 09:36:08 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.277 --rc genhtml_branch_coverage=1 00:25:31.277 --rc genhtml_function_coverage=1 00:25:31.277 --rc genhtml_legend=1 00:25:31.277 --rc geninfo_all_blocks=1 00:25:31.277 --rc geninfo_unexecuted_blocks=1 00:25:31.277 00:25:31.277 ' 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.277 --rc genhtml_branch_coverage=1 00:25:31.277 --rc genhtml_function_coverage=1 00:25:31.277 --rc genhtml_legend=1 00:25:31.277 --rc geninfo_all_blocks=1 00:25:31.277 --rc 
geninfo_unexecuted_blocks=1 00:25:31.277 00:25:31.277 ' 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.277 --rc genhtml_branch_coverage=1 00:25:31.277 --rc genhtml_function_coverage=1 00:25:31.277 --rc genhtml_legend=1 00:25:31.277 --rc geninfo_all_blocks=1 00:25:31.277 --rc geninfo_unexecuted_blocks=1 00:25:31.277 00:25:31.277 ' 00:25:31.277 09:36:08 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.277 --rc genhtml_branch_coverage=1 00:25:31.277 --rc genhtml_function_coverage=1 00:25:31.277 --rc genhtml_legend=1 00:25:31.277 --rc geninfo_all_blocks=1 00:25:31.277 --rc geninfo_unexecuted_blocks=1 00:25:31.277 00:25:31.277 ' 00:25:31.277 09:36:08 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:31.277 09:36:08 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.277 09:36:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.537 09:36:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:31.537 09:36:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.537 09:36:08 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:31.537 09:36:08 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.537 09:36:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.537 09:36:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.537 09:36:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.537 09:36:09 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.537 09:36:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.537 09:36:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.537 09:36:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:31.537 09:36:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.537 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.537 09:36:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.537 09:36:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:31.537 09:36:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:31.538 09:36:09 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n8uGC3BSqv 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n8uGC3BSqv 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n8uGC3BSqv 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.n8uGC3BSqv 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QXoVEz3fcL 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:31.538 09:36:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QXoVEz3fcL 00:25:31.538 09:36:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QXoVEz3fcL 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QXoVEz3fcL 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=85052 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:31.538 09:36:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85052 00:25:31.538 09:36:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85052 ']' 00:25:31.538 09:36:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.538 09:36:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
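prep_key above drops each key into a mktemp file in the NVMe TLS PSK interchange form and tightens it to mode 0600 (the 0660/0600 permission checks later in the test depend on that). A minimal sketch of producing such a file, assuming the TP 8011 interchange layout (prefix, two-digit hash identifier, base64 of the PSK bytes followed by their CRC-32), is:

key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)                 # e.g. /tmp/tmp.n8uGC3BSqv in the trace
python3 - "$key_hex" > "$path" <<'PY'
import base64, sys, zlib
# Illustration only: whether the harness feeds the hex string as raw bytes or
# as literal ASCII is not visible in the trace; raw bytes are assumed here.
psk = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(psk).to_bytes(4, "little")
print(f"NVMeTLSkey-1:00:{base64.b64encode(psk + crc).decode()}:")  # 00 = no hash (digest 0)
PY
chmod 0600 "$path"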
00:25:31.538 09:36:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.538 09:36:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.538 09:36:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:31.538 [2024-12-09 09:36:09.198290] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:25:31.538 [2024-12-09 09:36:09.198368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85052 ] 00:25:31.797 [2024-12-09 09:36:09.352240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.797 [2024-12-09 09:36:09.404404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.797 [2024-12-09 09:36:09.462863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:32.732 09:36:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:32.732 [2024-12-09 09:36:10.108906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.732 null0 00:25:32.732 [2024-12-09 09:36:10.140852] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:32.732 [2024-12-09 09:36:10.141034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.732 09:36:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:32.732 [2024-12-09 09:36:10.172793] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:32.732 request: 00:25:32.732 { 00:25:32.732 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:32.732 "secure_channel": false, 00:25:32.732 "listen_address": { 00:25:32.732 "trtype": "tcp", 00:25:32.732 "traddr": "127.0.0.1", 00:25:32.732 "trsvcid": "4420" 00:25:32.732 }, 00:25:32.732 "method": "nvmf_subsystem_add_listener", 
00:25:32.732 "req_id": 1 00:25:32.732 } 00:25:32.732 Got JSON-RPC error response 00:25:32.732 response: 00:25:32.732 { 00:25:32.732 "code": -32602, 00:25:32.732 "message": "Invalid parameters" 00:25:32.732 } 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:32.732 09:36:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=85069 00:25:32.732 09:36:10 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:32.732 09:36:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85069 /var/tmp/bperf.sock 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85069 ']' 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.732 09:36:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:32.732 [2024-12-09 09:36:10.236197] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:32.732 [2024-12-09 09:36:10.236271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85069 ] 00:25:32.732 [2024-12-09 09:36:10.388844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.732 [2024-12-09 09:36:10.441717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.991 [2024-12-09 09:36:10.484996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:33.558 09:36:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.558 09:36:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:33.558 09:36:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:33.558 09:36:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:33.817 09:36:11 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QXoVEz3fcL 00:25:33.817 09:36:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QXoVEz3fcL 00:25:34.076 09:36:11 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:34.076 09:36:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:34.076 09:36:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.076 09:36:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.076 09:36:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:34.348 09:36:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.n8uGC3BSqv == \/\t\m\p\/\t\m\p\.\n\8\u\G\C\3\B\S\q\v ]] 00:25:34.348 09:36:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:34.348 09:36:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:34.348 09:36:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.348 09:36:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.348 09:36:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:34.637 09:36:12 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.QXoVEz3fcL == \/\t\m\p\/\t\m\p\.\Q\X\o\V\E\z\3\f\c\L ]] 00:25:34.637 09:36:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.637 09:36:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:34.637 09:36:12 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:34.637 09:36:12 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.637 09:36:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:34.896 09:36:12 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:34.896 09:36:12 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:34.896 09:36:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:35.154 [2024-12-09 09:36:12.760754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.154 nvme0n1 00:25:35.154 09:36:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:35.154 09:36:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:35.154 09:36:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:35.154 09:36:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:35.154 09:36:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.154 09:36:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:35.412 09:36:13 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:35.412 09:36:13 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:35.412 09:36:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:35.412 09:36:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:35.412 09:36:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:35.412 09:36:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.412 09:36:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:35.671 09:36:13 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:35.671 09:36:13 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:35.929 Running I/O for 1 seconds... 
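The happy-path check traced here hands key0 to the controller as its TLS PSK and then starts the one-second randrw run inside the idle bdevperf process; the key's refcnt climbing from 1 to 2 confirms the attached controller holds a reference:

scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# refcnt check, as done by get_refcnt in the trace
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'

# kick off the configured workload
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests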
00:25:36.865 15838.00 IOPS, 61.87 MiB/s 00:25:36.865 Latency(us) 00:25:36.865 [2024-12-09T09:36:14.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.865 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:36.865 nvme0n1 : 1.00 15891.47 62.08 0.00 0.00 8038.81 2947.80 48217.65 00:25:36.865 [2024-12-09T09:36:14.588Z] =================================================================================================================== 00:25:36.865 [2024-12-09T09:36:14.588Z] Total : 15891.47 62.08 0.00 0.00 8038.81 2947.80 48217.65 00:25:36.865 { 00:25:36.865 "results": [ 00:25:36.865 { 00:25:36.865 "job": "nvme0n1", 00:25:36.865 "core_mask": "0x2", 00:25:36.865 "workload": "randrw", 00:25:36.865 "percentage": 50, 00:25:36.865 "status": "finished", 00:25:36.865 "queue_depth": 128, 00:25:36.865 "io_size": 4096, 00:25:36.865 "runtime": 1.00469, 00:25:36.865 "iops": 15891.469010341498, 00:25:36.865 "mibps": 62.076050821646476, 00:25:36.865 "io_failed": 0, 00:25:36.865 "io_timeout": 0, 00:25:36.865 "avg_latency_us": 8038.806479833903, 00:25:36.865 "min_latency_us": 2947.804016064257, 00:25:36.865 "max_latency_us": 48217.65140562249 00:25:36.865 } 00:25:36.865 ], 00:25:36.865 "core_count": 1 00:25:36.865 } 00:25:36.865 09:36:14 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:36.865 09:36:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:37.124 09:36:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:37.124 09:36:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:37.124 09:36:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:37.124 09:36:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:37.124 09:36:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:37.124 09:36:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:37.382 09:36:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:37.382 09:36:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:37.382 09:36:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:37.382 09:36:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:37.382 09:36:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:37.382 09:36:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:37.382 09:36:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:37.382 09:36:15 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:37.382 09:36:15 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:37.382 09:36:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:37.382 09:36:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:37.382 09:36:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:37.382 09:36:15 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.382 09:36:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:37.382 09:36:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.382 09:36:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:37.382 09:36:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:37.651 [2024-12-09 09:36:15.276134] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:37.651 [2024-12-09 09:36:15.276819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f705d0 (107): Transport endpoint is not connected 00:25:37.651 [2024-12-09 09:36:15.277809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f705d0 (9): Bad file descriptor 00:25:37.651 [2024-12-09 09:36:15.278806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:37.651 [2024-12-09 09:36:15.280400] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:37.651 [2024-12-09 09:36:15.280416] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:37.651 [2024-12-09 09:36:15.280429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:25:37.651 request: 00:25:37.651 { 00:25:37.651 "name": "nvme0", 00:25:37.651 "trtype": "tcp", 00:25:37.651 "traddr": "127.0.0.1", 00:25:37.651 "adrfam": "ipv4", 00:25:37.651 "trsvcid": "4420", 00:25:37.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.651 "prchk_reftag": false, 00:25:37.651 "prchk_guard": false, 00:25:37.651 "hdgst": false, 00:25:37.651 "ddgst": false, 00:25:37.651 "psk": "key1", 00:25:37.651 "allow_unrecognized_csi": false, 00:25:37.651 "method": "bdev_nvme_attach_controller", 00:25:37.652 "req_id": 1 00:25:37.652 } 00:25:37.652 Got JSON-RPC error response 00:25:37.652 response: 00:25:37.652 { 00:25:37.652 "code": -5, 00:25:37.652 "message": "Input/output error" 00:25:37.652 } 00:25:37.652 09:36:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:37.652 09:36:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.652 09:36:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.652 09:36:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.652 09:36:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:37.652 09:36:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:37.652 09:36:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:37.652 09:36:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:37.652 09:36:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:37.652 09:36:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:37.910 09:36:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:37.910 09:36:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:37.910 09:36:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:37.910 09:36:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:37.910 09:36:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:37.910 09:36:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:37.910 09:36:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:38.168 09:36:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:38.168 09:36:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:38.168 09:36:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:38.426 09:36:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:38.426 09:36:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:38.684 09:36:16 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:38.684 09:36:16 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:38.684 09:36:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:38.684 09:36:16 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:38.684 09:36:16 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.n8uGC3BSqv 00:25:38.685 09:36:16 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:38.685 09:36:16 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:25:38.685 09:36:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:38.685 09:36:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:38.685 09:36:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.685 09:36:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:38.685 09:36:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:38.685 09:36:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:38.685 09:36:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:38.943 [2024-12-09 09:36:16.591114] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n8uGC3BSqv': 0100660 00:25:38.943 [2024-12-09 09:36:16.591152] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:38.943 request: 00:25:38.943 { 00:25:38.943 "name": "key0", 00:25:38.943 "path": "/tmp/tmp.n8uGC3BSqv", 00:25:38.943 "method": "keyring_file_add_key", 00:25:38.943 "req_id": 1 00:25:38.943 } 00:25:38.943 Got JSON-RPC error response 00:25:38.943 response: 00:25:38.943 { 00:25:38.943 "code": -1, 00:25:38.943 "message": "Operation not permitted" 00:25:38.943 } 00:25:38.943 09:36:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:38.943 09:36:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:38.943 09:36:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:38.943 09:36:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:38.943 09:36:16 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.n8uGC3BSqv 00:25:38.943 09:36:16 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:38.943 09:36:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n8uGC3BSqv 00:25:39.201 09:36:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.n8uGC3BSqv 00:25:39.201 09:36:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:39.201 09:36:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:39.201 09:36:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:39.201 09:36:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:39.201 09:36:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:39.201 09:36:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:39.465 09:36:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:39.465 09:36:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:39.465 09:36:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:39.465 09:36:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:39.465 09:36:17 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:39.465 09:36:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.465 09:36:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:39.465 09:36:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.465 09:36:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:39.465 09:36:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:39.723 [2024-12-09 09:36:17.226219] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.n8uGC3BSqv': No such file or directory 00:25:39.723 [2024-12-09 09:36:17.226262] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:39.723 [2024-12-09 09:36:17.226281] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:39.723 [2024-12-09 09:36:17.226290] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:39.723 [2024-12-09 09:36:17.226299] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:39.723 [2024-12-09 09:36:17.226307] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:39.723 request: 00:25:39.723 { 00:25:39.723 "name": "nvme0", 00:25:39.723 "trtype": "tcp", 00:25:39.723 "traddr": "127.0.0.1", 00:25:39.723 "adrfam": "ipv4", 00:25:39.723 "trsvcid": "4420", 00:25:39.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:39.723 "prchk_reftag": false, 00:25:39.723 "prchk_guard": false, 00:25:39.723 "hdgst": false, 00:25:39.723 "ddgst": false, 00:25:39.723 "psk": "key0", 00:25:39.723 "allow_unrecognized_csi": false, 00:25:39.723 "method": "bdev_nvme_attach_controller", 00:25:39.723 "req_id": 1 00:25:39.723 } 00:25:39.723 Got JSON-RPC error response 00:25:39.723 response: 00:25:39.723 { 00:25:39.723 "code": -19, 00:25:39.723 "message": "No such device" 00:25:39.723 } 00:25:39.723 09:36:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:39.723 09:36:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.723 09:36:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.723 09:36:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.723 09:36:17 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:39.723 09:36:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:39.981 09:36:17 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:39.981 
09:36:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.s2QIEhUoXG 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:39.981 09:36:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:39.981 09:36:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:39.981 09:36:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:39.981 09:36:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:39.981 09:36:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:39.981 09:36:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.s2QIEhUoXG 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.s2QIEhUoXG 00:25:39.981 09:36:17 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.s2QIEhUoXG 00:25:39.981 09:36:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.s2QIEhUoXG 00:25:39.981 09:36:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.s2QIEhUoXG 00:25:40.239 09:36:17 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:40.240 09:36:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:40.496 nvme0n1 00:25:40.496 09:36:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:40.496 09:36:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:40.496 09:36:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:40.496 09:36:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:40.496 09:36:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:40.496 09:36:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:40.753 09:36:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:40.753 09:36:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:40.753 09:36:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:41.010 09:36:18 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:41.010 09:36:18 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:41.010 09:36:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:41.010 09:36:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:41.010 09:36:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:41.268 09:36:18 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:41.268 09:36:18 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:41.268 09:36:18 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:25:41.268 09:36:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:41.268 09:36:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:41.268 09:36:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:41.268 09:36:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:41.526 09:36:18 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:41.526 09:36:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:41.526 09:36:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:41.785 09:36:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:41.785 09:36:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:41.785 09:36:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:41.785 09:36:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:41.785 09:36:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.s2QIEhUoXG 00:25:41.785 09:36:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.s2QIEhUoXG 00:25:42.043 09:36:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QXoVEz3fcL 00:25:42.043 09:36:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QXoVEz3fcL 00:25:42.301 09:36:19 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:42.301 09:36:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:42.560 nvme0n1 00:25:42.560 09:36:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:42.560 09:36:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:42.819 09:36:20 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:42.819 "subsystems": [ 00:25:42.819 { 00:25:42.819 "subsystem": "keyring", 00:25:42.819 "config": [ 00:25:42.819 { 00:25:42.819 "method": "keyring_file_add_key", 00:25:42.819 "params": { 00:25:42.819 "name": "key0", 00:25:42.819 "path": "/tmp/tmp.s2QIEhUoXG" 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "keyring_file_add_key", 00:25:42.819 "params": { 00:25:42.819 "name": "key1", 00:25:42.819 "path": "/tmp/tmp.QXoVEz3fcL" 00:25:42.819 } 00:25:42.819 } 00:25:42.819 ] 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "subsystem": "iobuf", 00:25:42.819 "config": [ 00:25:42.819 { 00:25:42.819 "method": "iobuf_set_options", 00:25:42.819 "params": { 00:25:42.819 "small_pool_count": 8192, 00:25:42.819 "large_pool_count": 1024, 00:25:42.819 "small_bufsize": 8192, 00:25:42.819 "large_bufsize": 135168, 00:25:42.819 "enable_numa": false 00:25:42.819 } 00:25:42.819 } 00:25:42.819 ] 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "subsystem": 
"sock", 00:25:42.819 "config": [ 00:25:42.819 { 00:25:42.819 "method": "sock_set_default_impl", 00:25:42.819 "params": { 00:25:42.819 "impl_name": "uring" 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "sock_impl_set_options", 00:25:42.819 "params": { 00:25:42.819 "impl_name": "ssl", 00:25:42.819 "recv_buf_size": 4096, 00:25:42.819 "send_buf_size": 4096, 00:25:42.819 "enable_recv_pipe": true, 00:25:42.819 "enable_quickack": false, 00:25:42.819 "enable_placement_id": 0, 00:25:42.819 "enable_zerocopy_send_server": true, 00:25:42.819 "enable_zerocopy_send_client": false, 00:25:42.819 "zerocopy_threshold": 0, 00:25:42.819 "tls_version": 0, 00:25:42.819 "enable_ktls": false 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "sock_impl_set_options", 00:25:42.819 "params": { 00:25:42.819 "impl_name": "posix", 00:25:42.819 "recv_buf_size": 2097152, 00:25:42.819 "send_buf_size": 2097152, 00:25:42.819 "enable_recv_pipe": true, 00:25:42.819 "enable_quickack": false, 00:25:42.819 "enable_placement_id": 0, 00:25:42.819 "enable_zerocopy_send_server": true, 00:25:42.819 "enable_zerocopy_send_client": false, 00:25:42.819 "zerocopy_threshold": 0, 00:25:42.819 "tls_version": 0, 00:25:42.819 "enable_ktls": false 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "sock_impl_set_options", 00:25:42.819 "params": { 00:25:42.819 "impl_name": "uring", 00:25:42.819 "recv_buf_size": 2097152, 00:25:42.819 "send_buf_size": 2097152, 00:25:42.819 "enable_recv_pipe": true, 00:25:42.819 "enable_quickack": false, 00:25:42.819 "enable_placement_id": 0, 00:25:42.819 "enable_zerocopy_send_server": false, 00:25:42.819 "enable_zerocopy_send_client": false, 00:25:42.819 "zerocopy_threshold": 0, 00:25:42.819 "tls_version": 0, 00:25:42.819 "enable_ktls": false 00:25:42.819 } 00:25:42.819 } 00:25:42.819 ] 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "subsystem": "vmd", 00:25:42.819 "config": [] 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "subsystem": "accel", 00:25:42.819 "config": [ 00:25:42.819 { 00:25:42.819 "method": "accel_set_options", 00:25:42.819 "params": { 00:25:42.819 "small_cache_size": 128, 00:25:42.819 "large_cache_size": 16, 00:25:42.819 "task_count": 2048, 00:25:42.819 "sequence_count": 2048, 00:25:42.819 "buf_count": 2048 00:25:42.819 } 00:25:42.819 } 00:25:42.819 ] 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "subsystem": "bdev", 00:25:42.819 "config": [ 00:25:42.819 { 00:25:42.819 "method": "bdev_set_options", 00:25:42.819 "params": { 00:25:42.819 "bdev_io_pool_size": 65535, 00:25:42.819 "bdev_io_cache_size": 256, 00:25:42.819 "bdev_auto_examine": true, 00:25:42.819 "iobuf_small_cache_size": 128, 00:25:42.819 "iobuf_large_cache_size": 16 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "bdev_raid_set_options", 00:25:42.819 "params": { 00:25:42.819 "process_window_size_kb": 1024, 00:25:42.819 "process_max_bandwidth_mb_sec": 0 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "bdev_iscsi_set_options", 00:25:42.819 "params": { 00:25:42.819 "timeout_sec": 30 00:25:42.819 } 00:25:42.819 }, 00:25:42.819 { 00:25:42.819 "method": "bdev_nvme_set_options", 00:25:42.819 "params": { 00:25:42.819 "action_on_timeout": "none", 00:25:42.819 "timeout_us": 0, 00:25:42.819 "timeout_admin_us": 0, 00:25:42.819 "keep_alive_timeout_ms": 10000, 00:25:42.819 "arbitration_burst": 0, 00:25:42.819 "low_priority_weight": 0, 00:25:42.819 "medium_priority_weight": 0, 00:25:42.819 "high_priority_weight": 0, 00:25:42.819 "nvme_adminq_poll_period_us": 
10000, 00:25:42.819 "nvme_ioq_poll_period_us": 0, 00:25:42.819 "io_queue_requests": 512, 00:25:42.819 "delay_cmd_submit": true, 00:25:42.820 "transport_retry_count": 4, 00:25:42.820 "bdev_retry_count": 3, 00:25:42.820 "transport_ack_timeout": 0, 00:25:42.820 "ctrlr_loss_timeout_sec": 0, 00:25:42.820 "reconnect_delay_sec": 0, 00:25:42.820 "fast_io_fail_timeout_sec": 0, 00:25:42.820 "disable_auto_failback": false, 00:25:42.820 "generate_uuids": false, 00:25:42.820 "transport_tos": 0, 00:25:42.820 "nvme_error_stat": false, 00:25:42.820 "rdma_srq_size": 0, 00:25:42.820 "io_path_stat": false, 00:25:42.820 "allow_accel_sequence": false, 00:25:42.820 "rdma_max_cq_size": 0, 00:25:42.820 "rdma_cm_event_timeout_ms": 0, 00:25:42.820 "dhchap_digests": [ 00:25:42.820 "sha256", 00:25:42.820 "sha384", 00:25:42.820 "sha512" 00:25:42.820 ], 00:25:42.820 "dhchap_dhgroups": [ 00:25:42.820 "null", 00:25:42.820 "ffdhe2048", 00:25:42.820 "ffdhe3072", 00:25:42.820 "ffdhe4096", 00:25:42.820 "ffdhe6144", 00:25:42.820 "ffdhe8192" 00:25:42.820 ] 00:25:42.820 } 00:25:42.820 }, 00:25:42.820 { 00:25:42.820 "method": "bdev_nvme_attach_controller", 00:25:42.820 "params": { 00:25:42.820 "name": "nvme0", 00:25:42.820 "trtype": "TCP", 00:25:42.820 "adrfam": "IPv4", 00:25:42.820 "traddr": "127.0.0.1", 00:25:42.820 "trsvcid": "4420", 00:25:42.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.820 "prchk_reftag": false, 00:25:42.820 "prchk_guard": false, 00:25:42.820 "ctrlr_loss_timeout_sec": 0, 00:25:42.820 "reconnect_delay_sec": 0, 00:25:42.820 "fast_io_fail_timeout_sec": 0, 00:25:42.820 "psk": "key0", 00:25:42.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:42.820 "hdgst": false, 00:25:42.820 "ddgst": false, 00:25:42.820 "multipath": "multipath" 00:25:42.820 } 00:25:42.820 }, 00:25:42.820 { 00:25:42.820 "method": "bdev_nvme_set_hotplug", 00:25:42.820 "params": { 00:25:42.820 "period_us": 100000, 00:25:42.820 "enable": false 00:25:42.820 } 00:25:42.820 }, 00:25:42.820 { 00:25:42.820 "method": "bdev_wait_for_examine" 00:25:42.820 } 00:25:42.820 ] 00:25:42.820 }, 00:25:42.820 { 00:25:42.820 "subsystem": "nbd", 00:25:42.820 "config": [] 00:25:42.820 } 00:25:42.820 ] 00:25:42.820 }' 00:25:42.820 09:36:20 keyring_file -- keyring/file.sh@115 -- # killprocess 85069 00:25:42.820 09:36:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85069 ']' 00:25:42.820 09:36:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85069 00:25:42.820 09:36:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:42.820 09:36:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.820 09:36:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85069 00:25:43.082 killing process with pid 85069 00:25:43.082 Received shutdown signal, test time was about 1.000000 seconds 00:25:43.082 00:25:43.082 Latency(us) 00:25:43.082 [2024-12-09T09:36:20.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.082 [2024-12-09T09:36:20.805Z] =================================================================================================================== 00:25:43.082 [2024-12-09T09:36:20.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.082 09:36:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.082 09:36:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.082 09:36:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85069' 00:25:43.082 
09:36:20 keyring_file -- common/autotest_common.sh@973 -- # kill 85069 00:25:43.082 09:36:20 keyring_file -- common/autotest_common.sh@978 -- # wait 85069 00:25:43.082 09:36:20 keyring_file -- keyring/file.sh@118 -- # bperfpid=85315 00:25:43.082 09:36:20 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85315 /var/tmp/bperf.sock 00:25:43.082 09:36:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85315 ']' 00:25:43.082 09:36:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.082 09:36:20 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:43.082 "subsystems": [ 00:25:43.082 { 00:25:43.082 "subsystem": "keyring", 00:25:43.082 "config": [ 00:25:43.082 { 00:25:43.082 "method": "keyring_file_add_key", 00:25:43.082 "params": { 00:25:43.082 "name": "key0", 00:25:43.082 "path": "/tmp/tmp.s2QIEhUoXG" 00:25:43.082 } 00:25:43.082 }, 00:25:43.082 { 00:25:43.082 "method": "keyring_file_add_key", 00:25:43.082 "params": { 00:25:43.082 "name": "key1", 00:25:43.082 "path": "/tmp/tmp.QXoVEz3fcL" 00:25:43.082 } 00:25:43.082 } 00:25:43.082 ] 00:25:43.082 }, 00:25:43.082 { 00:25:43.082 "subsystem": "iobuf", 00:25:43.082 "config": [ 00:25:43.082 { 00:25:43.082 "method": "iobuf_set_options", 00:25:43.082 "params": { 00:25:43.082 "small_pool_count": 8192, 00:25:43.082 "large_pool_count": 1024, 00:25:43.082 "small_bufsize": 8192, 00:25:43.082 "large_bufsize": 135168, 00:25:43.082 "enable_numa": false 00:25:43.082 } 00:25:43.082 } 00:25:43.082 ] 00:25:43.082 }, 00:25:43.082 { 00:25:43.082 "subsystem": "sock", 00:25:43.082 "config": [ 00:25:43.082 { 00:25:43.082 "method": "sock_set_default_impl", 00:25:43.082 "params": { 00:25:43.082 "impl_name": "uring" 00:25:43.082 } 00:25:43.082 }, 00:25:43.082 { 00:25:43.082 "method": "sock_impl_set_options", 00:25:43.082 "params": { 00:25:43.082 "impl_name": "ssl", 00:25:43.082 "recv_buf_size": 4096, 00:25:43.082 "send_buf_size": 4096, 00:25:43.082 "enable_recv_pipe": true, 00:25:43.082 "enable_quickack": false, 00:25:43.082 "enable_placement_id": 0, 00:25:43.082 "enable_zerocopy_send_server": true, 00:25:43.082 "enable_zerocopy_send_client": false, 00:25:43.083 "zerocopy_threshold": 0, 00:25:43.083 "tls_version": 0, 00:25:43.083 "enable_ktls": false 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "sock_impl_set_options", 00:25:43.083 "params": { 00:25:43.083 "impl_name": "posix", 00:25:43.083 "recv_buf_size": 2097152, 00:25:43.083 "send_buf_size": 2097152, 00:25:43.083 "enable_recv_pipe": true, 00:25:43.083 "enable_quickack": false, 00:25:43.083 "enable_placement_id": 0, 00:25:43.083 "enable_zerocopy_send_server": true, 00:25:43.083 "enable_zerocopy_send_client": false, 00:25:43.083 "zerocopy_threshold": 0, 00:25:43.083 "tls_version": 0, 00:25:43.083 "enable_ktls": false 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "sock_impl_set_options", 00:25:43.083 "params": { 00:25:43.083 "impl_name": "uring", 00:25:43.083 "recv_buf_size": 2097152, 00:25:43.083 "send_buf_size": 2097152, 00:25:43.083 "enable_recv_pipe": true, 00:25:43.083 "enable_quickack": false, 00:25:43.083 "enable_placement_id": 0, 00:25:43.083 "enable_zerocopy_send_server": false, 00:25:43.083 "enable_zerocopy_send_client": false, 00:25:43.083 "zerocopy_threshold": 0, 00:25:43.083 "tls_version": 0, 00:25:43.083 "enable_ktls": false 00:25:43.083 } 00:25:43.083 } 00:25:43.083 ] 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "subsystem": "vmd", 00:25:43.083 "config": [] 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 
"subsystem": "accel", 00:25:43.083 "config": [ 00:25:43.083 { 00:25:43.083 "method": "accel_set_options", 00:25:43.083 "params": { 00:25:43.083 "small_cache_size": 128, 00:25:43.083 "large_cache_size": 16, 00:25:43.083 "task_count": 2048, 00:25:43.083 "sequence_count": 2048, 00:25:43.083 "buf_count": 2048 00:25:43.083 } 00:25:43.083 } 00:25:43.083 ] 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "subsystem": "bdev", 00:25:43.083 "config": [ 00:25:43.083 { 00:25:43.083 "method": "bdev_set_options", 00:25:43.083 "params": { 00:25:43.083 "bdev_io_pool_size": 65535, 00:25:43.083 "bdev_io_cache_size": 256, 00:25:43.083 "bdev_auto_examine": true, 00:25:43.083 "iobuf_small_cache_size": 128, 00:25:43.083 "iobuf_large_cache_size": 16 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "bdev_raid_set_options", 00:25:43.083 "params": { 00:25:43.083 "process_window_size_kb": 1024, 00:25:43.083 "process_max_bandwidth_mb_sec": 0 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "bdev_iscsi_set_options", 00:25:43.083 "params": { 00:25:43.083 "timeout_sec": 30 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "bdev_nvme_set_options", 00:25:43.083 "params": { 00:25:43.083 "action_on_timeout": "none", 00:25:43.083 "timeout_us": 0, 00:25:43.083 "timeout_admin_us": 0, 00:25:43.083 "keep_alive_timeout_ms": 10000, 00:25:43.083 "arbitration_burst": 0, 00:25:43.083 "low_priority_weight": 0, 00:25:43.083 "medium_priority_weight": 0, 00:25:43.083 "high_priority_weight": 0, 00:25:43.083 "nvme_adminq_poll_period_us": 10000, 00:25:43.083 "nvme_ioq_poll_period_us": 0, 00:25:43.083 "io_queue_requests": 512, 00:25:43.083 "delay_cmd_submit": true, 00:25:43.083 "transport_retry_count": 4, 00:25:43.083 "bdev_retry_count": 3, 00:25:43.083 "transport_ack_timeout": 0, 00:25:43.083 "ctrlr_loss_timeout_sec": 0, 00:25:43.083 "reconnect_delay_sec": 0, 00:25:43.083 "fast_io_fail_timeout_sec": 0, 00:25:43.083 "disable_auto_failback": false, 00:25:43.083 "generate_uuids": false, 00:25:43.083 "transport_tos": 0, 00:25:43.083 "nvme_error_stat": false, 00:25:43.083 "rdma_srq_size": 0, 00:25:43.083 "io_path_stat": false, 00:25:43.083 "allow_accel_sequence": false, 00:25:43.083 "rdma_max_cq_size": 0, 00:25:43.083 "rdma_cm_event_timeout_ms": 0, 00:25:43.083 "dhchap_digests": [ 00:25:43.083 "sha256", 00:25:43.083 "sha384", 00:25:43.083 "sha512" 00:25:43.083 ], 00:25:43.083 "dhchap_dhgroups": [ 00:25:43.083 "null", 00:25:43.083 "ffdhe2048", 00:25:43.083 "ffdhe3072", 00:25:43.083 "ffdhe4096", 00:25:43.083 "ffdhe6144", 00:25:43.083 "ffdhe8192" 00:25:43.083 ] 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "bdev_nvme_attach_controller", 00:25:43.083 "params": { 00:25:43.083 "name": "nvme0", 00:25:43.083 "trtype": "TCP", 00:25:43.083 "adrfam": "IPv4", 00:25:43.083 "traddr": "127.0.0.1", 00:25:43.083 "trsvcid": "4420", 00:25:43.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:43.083 "prchk_reftag": false, 00:25:43.083 "prchk_guard": false, 00:25:43.083 "ctrlr_loss_timeout_sec": 0, 00:25:43.083 "reconnect_delay_sec": 0, 00:25:43.083 "fast_io_fail_timeout_sec": 0, 00:25:43.083 "psk": "key0", 00:25:43.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:43.083 "hdgst": false, 00:25:43.083 "ddgst": false, 00:25:43.083 "multipath": "multipath" 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "method": "bdev_nvme_set_hotplug", 00:25:43.083 "params": { 00:25:43.083 "period_us": 100000, 00:25:43.083 "enable": false 00:25:43.083 } 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 
"method": "bdev_wait_for_examine" 00:25:43.083 } 00:25:43.083 ] 00:25:43.083 }, 00:25:43.083 { 00:25:43.083 "subsystem": "nbd", 00:25:43.083 "config": [] 00:25:43.083 } 00:25:43.083 ] 00:25:43.083 }' 00:25:43.083 09:36:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.083 09:36:20 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:43.083 09:36:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.083 09:36:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.083 09:36:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:43.083 [2024-12-09 09:36:20.785445] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 00:25:43.083 [2024-12-09 09:36:20.785533] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85315 ] 00:25:43.371 [2024-12-09 09:36:20.939094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.371 [2024-12-09 09:36:20.990104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.631 [2024-12-09 09:36:21.113859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:43.631 [2024-12-09 09:36:21.165072] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:44.200 09:36:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.200 09:36:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:44.200 09:36:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:44.200 09:36:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:44.200 09:36:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:44.461 09:36:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:44.461 09:36:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:44.461 09:36:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:44.461 09:36:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:44.461 09:36:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:44.461 09:36:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:44.461 09:36:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:44.461 09:36:22 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:44.461 09:36:22 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:44.461 09:36:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:44.720 09:36:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:44.720 09:36:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:44.720 09:36:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:44.720 09:36:22 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:44.720 09:36:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:44.720 09:36:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:44.720 09:36:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:44.720 09:36:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:44.980 09:36:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:44.980 09:36:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:44.980 09:36:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.s2QIEhUoXG /tmp/tmp.QXoVEz3fcL 00:25:44.980 09:36:22 keyring_file -- keyring/file.sh@20 -- # killprocess 85315 00:25:44.980 09:36:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85315 ']' 00:25:44.980 09:36:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85315 00:25:44.980 09:36:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:44.980 09:36:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.980 09:36:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85315 00:25:45.240 killing process with pid 85315 00:25:45.240 Received shutdown signal, test time was about 1.000000 seconds 00:25:45.240 00:25:45.240 Latency(us) 00:25:45.240 [2024-12-09T09:36:22.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.240 [2024-12-09T09:36:22.963Z] =================================================================================================================== 00:25:45.240 [2024-12-09T09:36:22.963Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85315' 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@973 -- # kill 85315 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@978 -- # wait 85315 00:25:45.240 09:36:22 keyring_file -- keyring/file.sh@21 -- # killprocess 85052 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85052 ']' 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85052 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85052 00:25:45.240 killing process with pid 85052 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85052' 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@973 -- # kill 85052 00:25:45.240 09:36:22 keyring_file -- common/autotest_common.sh@978 -- # wait 85052 00:25:45.809 00:25:45.809 real 0m14.523s 00:25:45.809 user 0m35.055s 00:25:45.809 sys 0m3.448s 00:25:45.809 09:36:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.809 ************************************ 
00:25:45.809 09:36:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:45.809 END TEST keyring_file 00:25:45.809 ************************************ 00:25:45.809 09:36:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:25:45.809 09:36:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:45.809 09:36:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.809 09:36:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.809 09:36:23 -- common/autotest_common.sh@10 -- # set +x 00:25:45.809 ************************************ 00:25:45.809 START TEST keyring_linux 00:25:45.809 ************************************ 00:25:45.809 09:36:23 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:45.809 Joined session keyring: 172853254 00:25:45.809 * Looking for test storage... 00:25:45.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:45.809 09:36:23 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:45.809 09:36:23 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:25:45.809 09:36:23 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.070 --rc genhtml_branch_coverage=1 00:25:46.070 --rc genhtml_function_coverage=1 00:25:46.070 --rc genhtml_legend=1 00:25:46.070 --rc geninfo_all_blocks=1 00:25:46.070 --rc geninfo_unexecuted_blocks=1 00:25:46.070 00:25:46.070 ' 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.070 --rc genhtml_branch_coverage=1 00:25:46.070 --rc genhtml_function_coverage=1 00:25:46.070 --rc genhtml_legend=1 00:25:46.070 --rc geninfo_all_blocks=1 00:25:46.070 --rc geninfo_unexecuted_blocks=1 00:25:46.070 00:25:46.070 ' 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.070 --rc genhtml_branch_coverage=1 00:25:46.070 --rc genhtml_function_coverage=1 00:25:46.070 --rc genhtml_legend=1 00:25:46.070 --rc geninfo_all_blocks=1 00:25:46.070 --rc geninfo_unexecuted_blocks=1 00:25:46.070 00:25:46.070 ' 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.070 --rc genhtml_branch_coverage=1 00:25:46.070 --rc genhtml_function_coverage=1 00:25:46.070 --rc genhtml_legend=1 00:25:46.070 --rc geninfo_all_blocks=1 00:25:46.070 --rc geninfo_unexecuted_blocks=1 00:25:46.070 00:25:46.070 ' 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.070 09:36:23 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e6e73585-5c6f-4b44-8d2b-f6eb11be0f68 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.070 09:36:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.070 09:36:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.070 09:36:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.070 09:36:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.070 09:36:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:46.070 09:36:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 
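The keyring_linux suite is launched through scripts/keyctl-session-wrapper, which is why "Joined session keyring: 172853254" appears right after the START TEST banner above: the test runs inside a fresh kernel session keyring, so every user key it adds stays private to this process tree and is discarded when the wrapper exits. A minimal sketch of that setup, assuming the wrapper does little more than delegate to keyctl (the wrapper script itself is not reproduced in this log):

    # Join an anonymous session keyring, then run the test inside it.
    # keyctl prints "Joined session keyring: <serial>" before starting the program.
    keyctl session - /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh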
00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.070 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:46.070 /tmp/:spdk-test:key0 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:46.070 09:36:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:46.070 /tmp/:spdk-test:key1 00:25:46.070 09:36:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85441 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:46.070 09:36:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85441 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85441 ']' 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.070 09:36:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:46.330 [2024-12-09 09:36:23.801375] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
00:25:46.330 [2024-12-09 09:36:23.801651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85441 ] 00:25:46.330 [2024-12-09 09:36:23.955618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.330 [2024-12-09 09:36:24.003353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.589 [2024-12-09 09:36:24.062601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:47.158 09:36:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:47.158 [2024-12-09 09:36:24.684794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.158 null0 00:25:47.158 [2024-12-09 09:36:24.716689] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:47.158 [2024-12-09 09:36:24.716969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.158 09:36:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:47.158 315737622 00:25:47.158 09:36:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:47.158 883877461 00:25:47.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:47.158 09:36:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85456 00:25:47.158 09:36:24 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:47.158 09:36:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85456 /var/tmp/bperf.sock 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85456 ']' 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.158 09:36:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:47.158 [2024-12-09 09:36:24.802348] Starting SPDK v25.01-pre git sha1 496bfd677 / DPDK 24.03.0 initialization... 
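While bdevperf is still coming up, note that linux.sh@66-67 above have already loaded the two TLS secrets straight into that session keyring; the bare numbers echoed back (315737622 and 883877461) are the kernel key serials that the check_keys steps below search for. The same flow with plain keyctl, using the exact key name and payload from this run, looks roughly like this:

    keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    keyctl search @s user ":spdk-test:key0"   # prints the serial, 315737622 in this run
    keyctl print 315737622                    # dumps the stored NVMeTLSkey-1 payload
    keyctl unlink 315737622                   # the cleanup step at the end of linux.sh ("1 links removed")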
00:25:47.158 [2024-12-09 09:36:24.802589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85456 ] 00:25:47.418 [2024-12-09 09:36:24.952907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.418 [2024-12-09 09:36:25.001641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.984 09:36:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.984 09:36:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:47.984 09:36:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:47.984 09:36:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:48.241 09:36:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:48.241 09:36:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:48.499 [2024-12-09 09:36:26.160300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:48.499 09:36:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:48.499 09:36:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:48.757 [2024-12-09 09:36:26.423143] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:49.016 nvme0n1 00:25:49.016 09:36:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:49.016 09:36:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:49.016 09:36:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:49.016 09:36:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:49.016 09:36:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:49.016 09:36:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:49.273 09:36:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:49.273 09:36:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:49.273 09:36:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:49.273 09:36:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:49.273 09:36:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:49.273 09:36:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:49.273 09:36:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@25 -- # sn=315737622 00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
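The NVMeTLSkey-1 string stored in the keyring (and echoed again by keyctl print just below) is the TLS PSK interchange format that format_interchange_psk produced earlier: a fixed prefix, a two-digit hash indicator (00 here, meaning the configured key is used as-is), and a base64 blob, all separated by colons. Decoding the blob from this run shows the 32-character hex key defined at linux.sh@13 plus four trailing bytes, which I take to be the format's CRC-32 over the key bytes; that reading is an interpretation, the log itself only shows the encoded value:

    echo 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ' | base64 -d | head -c 32; echo
    # -> 00112233445566778899aabbccddeeff   (key0 as set in linux.sh@13)
    echo 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ' | base64 -d | tail -c 4 | xxd -p
    # -> the 4-byte checksum appended by format_interchange_psk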
00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 315737622 == \3\1\5\7\3\7\6\2\2 ]] 00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 315737622 00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:49.530 09:36:27 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.787 Running I/O for 1 seconds... 00:25:50.734 16805.00 IOPS, 65.64 MiB/s 00:25:50.734 Latency(us) 00:25:50.734 [2024-12-09T09:36:28.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.734 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:50.734 nvme0n1 : 1.01 16804.42 65.64 0.00 0.00 7585.52 5711.37 12475.53 00:25:50.734 [2024-12-09T09:36:28.457Z] =================================================================================================================== 00:25:50.734 [2024-12-09T09:36:28.457Z] Total : 16804.42 65.64 0.00 0.00 7585.52 5711.37 12475.53 00:25:50.734 { 00:25:50.734 "results": [ 00:25:50.734 { 00:25:50.734 "job": "nvme0n1", 00:25:50.734 "core_mask": "0x2", 00:25:50.734 "workload": "randread", 00:25:50.734 "status": "finished", 00:25:50.734 "queue_depth": 128, 00:25:50.734 "io_size": 4096, 00:25:50.734 "runtime": 1.007711, 00:25:50.735 "iops": 16804.421108829814, 00:25:50.735 "mibps": 65.64226995636646, 00:25:50.735 "io_failed": 0, 00:25:50.735 "io_timeout": 0, 00:25:50.735 "avg_latency_us": 7585.52288928953, 00:25:50.735 "min_latency_us": 5711.370281124498, 00:25:50.735 "max_latency_us": 12475.527710843373 00:25:50.735 } 00:25:50.735 ], 00:25:50.735 "core_count": 1 00:25:50.735 } 00:25:50.735 09:36:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:50.735 09:36:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:50.992 09:36:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:50.992 09:36:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:50.992 09:36:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:50.992 09:36:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:50.992 09:36:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:50.992 09:36:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:51.250 09:36:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:51.250 09:36:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:51.250 09:36:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:51.250 09:36:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:51.250 09:36:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:25:51.250 09:36:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:51.250 
09:36:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:51.250 09:36:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.250 09:36:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:51.250 09:36:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:51.250 09:36:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:51.250 09:36:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:51.509 [2024-12-09 09:36:29.032651] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:51.509 [2024-12-09 09:36:29.033009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13501d0 (107): Transport endpoint is not connected 00:25:51.509 [2024-12-09 09:36:29.033998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13501d0 (9): Bad file descriptor 00:25:51.509 [2024-12-09 09:36:29.034995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:51.509 [2024-12-09 09:36:29.035011] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:51.509 [2024-12-09 09:36:29.035021] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:51.509 [2024-12-09 09:36:29.035032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:25:51.509 request: 00:25:51.509 { 00:25:51.509 "name": "nvme0", 00:25:51.509 "trtype": "tcp", 00:25:51.509 "traddr": "127.0.0.1", 00:25:51.509 "adrfam": "ipv4", 00:25:51.509 "trsvcid": "4420", 00:25:51.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:51.509 "prchk_reftag": false, 00:25:51.509 "prchk_guard": false, 00:25:51.509 "hdgst": false, 00:25:51.509 "ddgst": false, 00:25:51.509 "psk": ":spdk-test:key1", 00:25:51.509 "allow_unrecognized_csi": false, 00:25:51.509 "method": "bdev_nvme_attach_controller", 00:25:51.509 "req_id": 1 00:25:51.509 } 00:25:51.509 Got JSON-RPC error response 00:25:51.509 response: 00:25:51.509 { 00:25:51.509 "code": -5, 00:25:51.509 "message": "Input/output error" 00:25:51.509 } 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@33 -- # sn=315737622 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 315737622 00:25:51.509 1 links removed 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@33 -- # sn=883877461 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 883877461 00:25:51.509 1 links removed 00:25:51.509 09:36:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85456 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85456 ']' 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85456 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85456 00:25:51.509 killing process with pid 85456 00:25:51.509 Received shutdown signal, test time was about 1.000000 seconds 00:25:51.509 00:25:51.509 Latency(us) 00:25:51.509 [2024-12-09T09:36:29.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.509 [2024-12-09T09:36:29.232Z] =================================================================================================================== 00:25:51.509 [2024-12-09T09:36:29.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.509 09:36:29 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85456' 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 85456 00:25:51.509 09:36:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 85456 00:25:51.767 09:36:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85441 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85441 ']' 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85441 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85441 00:25:51.767 killing process with pid 85441 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85441' 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 85441 00:25:51.767 09:36:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 85441 00:25:52.027 ************************************ 00:25:52.027 END TEST keyring_linux 00:25:52.027 ************************************ 00:25:52.027 00:25:52.027 real 0m6.338s 00:25:52.027 user 0m11.932s 00:25:52.027 sys 0m1.783s 00:25:52.027 09:36:29 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.027 09:36:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:52.027 09:36:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:52.027 09:36:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:52.027 09:36:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:52.027 09:36:29 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:52.027 09:36:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:52.027 09:36:29 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:25:52.027 09:36:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:52.027 09:36:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.027 09:36:29 -- common/autotest_common.sh@10 -- # set +x 00:25:52.287 09:36:29 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:52.287 09:36:29 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:52.287 09:36:29 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:52.287 09:36:29 -- common/autotest_common.sh@10 -- # set +x 00:25:54.822 INFO: APP EXITING 00:25:54.822 INFO: killing all VMs 
00:25:54.822 INFO: killing vhost app 00:25:54.822 INFO: EXIT DONE 00:25:55.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:55.391 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:55.650 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:56.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:56.588 Cleaning 00:25:56.588 Removing: /var/run/dpdk/spdk0/config 00:25:56.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:56.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:56.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:56.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:56.588 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:56.588 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:56.588 Removing: /var/run/dpdk/spdk1/config 00:25:56.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:56.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:56.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:56.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:56.588 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:56.588 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:56.588 Removing: /var/run/dpdk/spdk2/config 00:25:56.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:56.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:56.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:56.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:56.588 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:56.588 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:56.589 Removing: /var/run/dpdk/spdk3/config 00:25:56.589 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:56.589 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:56.589 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:56.589 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:56.589 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:56.589 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:56.589 Removing: /var/run/dpdk/spdk4/config 00:25:56.589 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:56.589 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:56.589 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:56.589 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:56.589 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:56.589 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:56.589 Removing: /dev/shm/nvmf_trace.0 00:25:56.589 Removing: /dev/shm/spdk_tgt_trace.pid56694 00:25:56.589 Removing: /var/run/dpdk/spdk0 00:25:56.589 Removing: /var/run/dpdk/spdk1 00:25:56.589 Removing: /var/run/dpdk/spdk2 00:25:56.589 Removing: /var/run/dpdk/spdk3 00:25:56.589 Removing: /var/run/dpdk/spdk4 00:25:56.589 Removing: /var/run/dpdk/spdk_pid56541 00:25:56.589 Removing: /var/run/dpdk/spdk_pid56694 00:25:56.589 Removing: /var/run/dpdk/spdk_pid56906 00:25:56.589 Removing: /var/run/dpdk/spdk_pid56987 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57014 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57124 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57142 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57276 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57466 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57620 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57698 00:25:56.589 
Removing: /var/run/dpdk/spdk_pid57776 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57870 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57955 00:25:56.589 Removing: /var/run/dpdk/spdk_pid57988 00:25:56.589 Removing: /var/run/dpdk/spdk_pid58029 00:25:56.589 Removing: /var/run/dpdk/spdk_pid58093 00:25:56.589 Removing: /var/run/dpdk/spdk_pid58209 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58639 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58685 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58731 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58747 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58814 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58830 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58891 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58902 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58953 00:25:56.848 Removing: /var/run/dpdk/spdk_pid58971 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59011 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59029 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59154 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59195 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59272 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59616 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59629 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59660 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59674 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59689 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59708 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59726 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59737 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59756 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59775 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59785 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59804 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59823 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59833 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59852 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59870 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59883 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59902 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59916 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59931 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59967 00:25:56.848 Removing: /var/run/dpdk/spdk_pid59981 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60010 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60082 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60111 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60120 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60149 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60157 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60167 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60204 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60223 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60246 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60261 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60265 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60274 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60284 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60293 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60303 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60312 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60341 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60367 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60377 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60405 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60415 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60417 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60465 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60471 00:25:56.848 Removing: 
/var/run/dpdk/spdk_pid60503 00:25:56.848 Removing: /var/run/dpdk/spdk_pid60509 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60518 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60520 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60533 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60537 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60550 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60552 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60634 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60676 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60784 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60822 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60862 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60876 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60898 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60913 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60944 00:25:57.107 Removing: /var/run/dpdk/spdk_pid60965 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61043 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61059 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61098 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61172 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61228 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61251 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61352 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61396 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61427 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61659 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61751 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61785 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61809 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61850 00:25:57.107 Removing: /var/run/dpdk/spdk_pid61882 00:25:57.108 Removing: /var/run/dpdk/spdk_pid61915 00:25:57.108 Removing: /var/run/dpdk/spdk_pid61952 00:25:57.108 Removing: /var/run/dpdk/spdk_pid62345 00:25:57.108 Removing: /var/run/dpdk/spdk_pid62383 00:25:57.108 Removing: /var/run/dpdk/spdk_pid62733 00:25:57.108 Removing: /var/run/dpdk/spdk_pid63189 00:25:57.108 Removing: /var/run/dpdk/spdk_pid63459 00:25:57.108 Removing: /var/run/dpdk/spdk_pid64312 00:25:57.108 Removing: /var/run/dpdk/spdk_pid65240 00:25:57.108 Removing: /var/run/dpdk/spdk_pid65363 00:25:57.108 Removing: /var/run/dpdk/spdk_pid65425 00:25:57.108 Removing: /var/run/dpdk/spdk_pid66863 00:25:57.108 Removing: /var/run/dpdk/spdk_pid67184 00:25:57.108 Removing: /var/run/dpdk/spdk_pid70616 00:25:57.108 Removing: /var/run/dpdk/spdk_pid70973 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71082 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71215 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71245 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71270 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71302 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71396 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71532 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71691 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71767 00:25:57.108 Removing: /var/run/dpdk/spdk_pid71960 00:25:57.108 Removing: /var/run/dpdk/spdk_pid72039 00:25:57.108 Removing: /var/run/dpdk/spdk_pid72127 00:25:57.367 Removing: /var/run/dpdk/spdk_pid72487 00:25:57.367 Removing: /var/run/dpdk/spdk_pid72917 00:25:57.367 Removing: /var/run/dpdk/spdk_pid72918 00:25:57.367 Removing: /var/run/dpdk/spdk_pid72919 00:25:57.367 Removing: /var/run/dpdk/spdk_pid73192 00:25:57.367 Removing: /var/run/dpdk/spdk_pid73463 00:25:57.367 Removing: /var/run/dpdk/spdk_pid73854 00:25:57.367 Removing: /var/run/dpdk/spdk_pid73862 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74188 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74208 
00:25:57.367 Removing: /var/run/dpdk/spdk_pid74222 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74253 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74258 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74617 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74666 00:25:57.367 Removing: /var/run/dpdk/spdk_pid74996 00:25:57.367 Removing: /var/run/dpdk/spdk_pid75201 00:25:57.367 Removing: /var/run/dpdk/spdk_pid75634 00:25:57.367 Removing: /var/run/dpdk/spdk_pid76190 00:25:57.367 Removing: /var/run/dpdk/spdk_pid77035 00:25:57.367 Removing: /var/run/dpdk/spdk_pid77686 00:25:57.367 Removing: /var/run/dpdk/spdk_pid77688 00:25:57.367 Removing: /var/run/dpdk/spdk_pid79719 00:25:57.367 Removing: /var/run/dpdk/spdk_pid79778 00:25:57.367 Removing: /var/run/dpdk/spdk_pid79834 00:25:57.367 Removing: /var/run/dpdk/spdk_pid79895 00:25:57.367 Removing: /var/run/dpdk/spdk_pid80009 00:25:57.367 Removing: /var/run/dpdk/spdk_pid80065 00:25:57.367 Removing: /var/run/dpdk/spdk_pid80125 00:25:57.367 Removing: /var/run/dpdk/spdk_pid80181 00:25:57.367 Removing: /var/run/dpdk/spdk_pid80556 00:25:57.367 Removing: /var/run/dpdk/spdk_pid81771 00:25:57.367 Removing: /var/run/dpdk/spdk_pid81906 00:25:57.367 Removing: /var/run/dpdk/spdk_pid82148 00:25:57.367 Removing: /var/run/dpdk/spdk_pid82763 00:25:57.367 Removing: /var/run/dpdk/spdk_pid82924 00:25:57.367 Removing: /var/run/dpdk/spdk_pid83087 00:25:57.367 Removing: /var/run/dpdk/spdk_pid83186 00:25:57.367 Removing: /var/run/dpdk/spdk_pid83352 00:25:57.367 Removing: /var/run/dpdk/spdk_pid83461 00:25:57.367 Removing: /var/run/dpdk/spdk_pid84181 00:25:57.367 Removing: /var/run/dpdk/spdk_pid84217 00:25:57.367 Removing: /var/run/dpdk/spdk_pid84252 00:25:57.367 Removing: /var/run/dpdk/spdk_pid84513 00:25:57.367 Removing: /var/run/dpdk/spdk_pid84547 00:25:57.367 Removing: /var/run/dpdk/spdk_pid84582 00:25:57.367 Removing: /var/run/dpdk/spdk_pid85052 00:25:57.367 Removing: /var/run/dpdk/spdk_pid85069 00:25:57.367 Removing: /var/run/dpdk/spdk_pid85315 00:25:57.367 Removing: /var/run/dpdk/spdk_pid85441 00:25:57.367 Removing: /var/run/dpdk/spdk_pid85456 00:25:57.367 Clean 00:25:57.627 09:36:35 -- common/autotest_common.sh@1453 -- # return 0 00:25:57.627 09:36:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:57.627 09:36:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.627 09:36:35 -- common/autotest_common.sh@10 -- # set +x 00:25:57.627 09:36:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:57.627 09:36:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.627 09:36:35 -- common/autotest_common.sh@10 -- # set +x 00:25:57.627 09:36:35 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:57.627 09:36:35 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:57.627 09:36:35 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:57.627 09:36:35 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:57.627 09:36:35 -- spdk/autotest.sh@398 -- # hostname 00:25:57.627 09:36:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:57.885 geninfo: WARNING: invalid characters removed from testname! 
00:26:24.449 09:37:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:26.353 09:37:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:28.902 09:37:06 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:31.440 09:37:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:33.345 09:37:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:35.243 09:37:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:37.787 09:37:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:26:37.787 09:37:14 -- spdk/autorun.sh@1 -- $ timing_finish
00:26:37.787 09:37:14 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:26:37.787 09:37:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:26:37.787 09:37:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:26:37.787 09:37:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:37.787 + [[ -n 5217 ]]
00:26:37.787 + sudo kill 5217
00:26:37.795 [Pipeline] }
00:26:37.806 [Pipeline] // timeout
00:26:37.810 [Pipeline] }
00:26:37.822 [Pipeline] // stage
00:26:37.826 [Pipeline] }
00:26:37.839 [Pipeline] // catchError
00:26:37.846 [Pipeline] stage
00:26:37.848 [Pipeline] { (Stop VM)
00:26:37.856 [Pipeline] sh
00:26:38.135 + vagrant halt
00:26:41.423 ==> default: Halting domain...
00:26:48.004 [Pipeline] sh
00:26:48.285 + vagrant destroy -f
00:26:51.572 ==> default: Removing domain...
00:26:51.582 [Pipeline] sh
00:26:51.860 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:26:51.868 [Pipeline] }
00:26:51.882 [Pipeline] // stage
00:26:51.886 [Pipeline] }
00:26:51.899 [Pipeline] // dir
00:26:51.903 [Pipeline] }
00:26:51.917 [Pipeline] // wrap
00:26:51.923 [Pipeline] }
00:26:51.935 [Pipeline] // catchError
00:26:51.943 [Pipeline] stage
00:26:51.945 [Pipeline] { (Epilogue)
00:26:51.956 [Pipeline] sh
00:26:52.239 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:58.821 [Pipeline] catchError
00:26:58.824 [Pipeline] {
00:26:58.836 [Pipeline] sh
00:26:59.120 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:59.120 Artifacts sizes are good
00:26:59.128 [Pipeline] }
00:26:59.143 [Pipeline] // catchError
00:26:59.155 [Pipeline] archiveArtifacts
00:26:59.160 Archiving artifacts
00:26:59.301 [Pipeline] cleanWs
00:26:59.316 [WS-CLEANUP] Deleting project workspace...
00:26:59.317 [WS-CLEANUP] Deferred wipeout is used...
00:26:59.323 [WS-CLEANUP] done
00:26:59.325 [Pipeline] }
00:26:59.340 [Pipeline] // stage
00:26:59.346 [Pipeline] }
00:26:59.360 [Pipeline] // node
00:26:59.366 [Pipeline] End of Pipeline
00:26:59.402 Finished: SUCCESS